
    Data-Driven and Game-Theoretic Approaches for Privacy

    In the past few decades, there has been a remarkable shift in the boundary between public and private information. The application of information technology and electronic communications allows service providers (businesses) to collect large amounts of data. However, this "data collection" process can put the privacy of users at risk and can also lead to user reluctance in accepting services or sharing data. This dissertation first investigates privacy-sensitive interactions between consumers and retailers/service providers under different scenarios, and then focuses on a unified framework for information-theoretic privacy and privacy mechanisms that can be learned directly from data. Existing approaches such as differential privacy or information-theoretic privacy try to quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy sensitivity. The first part of this dissertation introduces models to study consumer-retailer interaction problems and to better understand how retailers/service providers can balance their revenue objectives while remaining sensitive to user privacy concerns. Three scenarios are considered: (i) consumer-retailer interaction via personalized advertisements; (ii) incentive mechanisms that electrical utility providers need to offer to privacy-sensitive consumers with alternative energy sources; and (iii) the market viability of offering privacy-guaranteed free online services. Game-theoretic models capture the behaviors of both consumers and retailers and provide insights for retailers to maximize their profits when interacting with privacy-sensitive consumers. Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. In the second part, a novel context-aware privacy framework called generative adversarial privacy (GAP) is introduced. Inspired by recent advancements in generative adversarial networks, GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. For appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. Both synthetic and real-world datasets are used to show that GAP can greatly reduce the adversary's capability of inferring private information at a small cost of distorting the data. (Doctoral dissertation, Electrical Engineering.)
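
    The constrained minimax formulation in GAP can be pictured with a GAN-style alternating optimization. The following Python/PyTorch fragment is only a minimal sketch under assumed ingredients (tiny multilayer perceptrons, synthetic data, and a soft penalty standing in for the distortion constraint); it is not the dissertation's actual model or training procedure.

        # Hypothetical GAP-style alternating minimax loop: the privatizer distorts
        # data x to hide a private attribute s; the adversary tries to infer s
        # from the released x_hat. All names and hyperparameters are illustrative.
        import torch
        import torch.nn as nn

        dim_x, distortion_budget = 8, 0.5
        privatizer = nn.Sequential(nn.Linear(dim_x, 16), nn.ReLU(), nn.Linear(16, dim_x))
        adversary = nn.Sequential(nn.Linear(dim_x, 16), nn.ReLU(), nn.Linear(16, 1))
        opt_p = torch.optim.Adam(privatizer.parameters(), lr=1e-3)
        opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
        bce = nn.BCEWithLogitsLoss()

        for step in range(1000):
            x = torch.randn(64, dim_x)                 # public data (synthetic here)
            s = (x[:, :1] > 0).float()                 # private attribute correlated with x
            x_hat = privatizer(x)

            # Adversary step: improve inference of s from the released data.
            adv_loss = bce(adversary(x_hat.detach()), s)
            opt_a.zero_grad(); adv_loss.backward(); opt_a.step()

            # Privatizer step: make the adversary fail while keeping distortion
            # bounded (the constraint is folded in as a soft penalty here).
            distortion = ((x_hat - x) ** 2).mean()
            priv_loss = -bce(adversary(x_hat), s) + 10.0 * torch.relu(distortion - distortion_budget)
            opt_p.zero_grad(); priv_loss.backward(); opt_p.step()

    In this sketch the privatizer's loss is the negative of the adversary's, matching the zero-sum reading of the minimax game; the dissertation's actual choice of adversarial loss determines which information-theoretic guarantee is obtained.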

    Fighting Online Click-Fraud Using Bluff Ads

    Online advertising is currently the greatest source of revenue for many Internet giants. The increased number of specialized websites and modern profiling techniques have contributed to an explosion in ad brokers' income from online advertising. The single biggest threat to this growth, however, is click-fraud. Trained botnets and even individuals are hired by click-fraud specialists to maximize the revenue certain users earn from the ads they publish on their websites, or to launch attacks between competing businesses. In this note we wish to raise the awareness of the networking research community about potential research areas within this emerging field. As an example strategy, we present bluff ads, a class of ads that join forces to increase the effort level for click-fraud spammers. Bluff ads are either targeted ads with irrelevant display text, or ads with highly relevant display text but irrelevant targeting information. They act as a litmus test for the legitimacy of the individual clicking on the ads. Together with standard threshold-based methods, bluff ads help to decrease click-fraud levels.
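
    The abstract's closing point, combining bluff ads with standard threshold-based detection, can be illustrated with a small rate check. The sketch below is an assumption-laden toy (field names, the 10% threshold, and the log format are hypothetical), not the paper's detection pipeline.

        # Flag users whose share of clicks landing on bluff ads exceeds a threshold.
        from collections import Counter

        def flag_suspicious(clicks, bluff_ad_ids, threshold=0.10):
            """clicks: iterable of (user_id, ad_id) pairs."""
            total, bluff = Counter(), Counter()
            for user_id, ad_id in clicks:
                total[user_id] += 1
                if ad_id in bluff_ad_ids:
                    bluff[user_id] += 1
            return {u for u in total if bluff[u] / total[u] > threshold}

        clicks = [("u1", "ad7"), ("u1", "bluff3"), ("u1", "bluff9"), ("u2", "ad7")]
        print(flag_suspicious(clicks, bluff_ad_ids={"bluff3", "bluff9"}))  # {'u1'}

    A legitimate user rarely clicks an ad whose text and targeting contradict each other, so a high bluff-click share is treated as a signal of automated or paid clicking.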

    Interest-disclosing Mechanisms for Advertising are Privacy-Exposing (not Preserving)

    Today, targeted online advertising relies on unique identifiers assigned to users through third-party cookies, a practice at odds with user privacy. While the web and advertising communities have proposed interest-disclosing mechanisms, including Google's Topics API, as solutions, an independent analysis of these proposals in realistic scenarios has yet to be performed. In this paper, we attempt to validate the privacy (i.e., preventing unique identification) and utility (i.e., enabling ad targeting) claims of Google's Topics proposal in the context of realistic user behavior. Through new statistical models of the distribution of user behaviors and resulting targeting topics, we analyze the capabilities of malicious advertisers observing users over time and colluding with other third parties. Our analysis shows that even in the best case, identification of individual users across sites is possible, as 0.4% of the 250k users we simulate are re-identified. These guarantees weaken further over time and when advertisers collude: 57% of users are uniquely re-identified after 15 weeks of browsing, increasing to 75% after 30 weeks. While we measure that the Topics API provides moderate utility, we also find that advertisers and publishers can abuse the Topics API to potentially assign unique identifiers to users, defeating the desired privacy guarantees. As a result, the inherent diversity of users' interests on the web is directly at odds with the privacy objectives of interest-disclosing mechanisms; we discuss how any replacement of third-party cookies may have to seek other avenues to achieve privacy for the web.
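
    The collusion result can be made concrete with a toy simulation of cross-site topic matching. The numbers below (user count, topic pool, weeks observed, and the uniqueness-based matching rule) are illustrative assumptions, not the paper's exact statistical model or parameters.

        # Toy cross-site re-identification via weekly topic observations.
        import random
        from collections import Counter

        random.seed(0)
        n_users, n_topics, n_weeks, top_s = 1000, 350, 15, 5

        # Each user has a stable set of preferred topics; each week one of them
        # is exposed to every site the user visits.
        prefs = [random.sample(range(n_topics), top_s) for _ in range(n_users)]

        def observe(u):
            return frozenset(random.choice(prefs[u]) for _ in range(n_weeks))

        site_a = [observe(u) for u in range(n_users)]   # advertiser on site A
        site_b = [observe(u) for u in range(n_users)]   # colluding advertiser on site B
        count_a, count_b = Counter(site_a), Counter(site_b)

        # A user is re-identified if the observed topic set is unique on both
        # sites and the two sets coincide.
        reidentified = sum(
            1 for u in range(n_users)
            if site_a[u] == site_b[u] and count_a[site_a[u]] == 1 and count_b[site_b[u]] == 1
        )
        print(f"{reidentified / n_users:.1%} of simulated users re-identified")

    Even this crude matching rule links a large fraction of users once enough weeks are observed, which is the qualitative effect the paper quantifies under realistic behavior models.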

    The Price of Privacy - An Evaluation of the Economic Value of Collecting Clickstream Data

    The analysis of clickstream data facilitates the understanding and prediction of customer behavior in e-commerce. Companies can leverage such data to increase revenue. For customers and website users, on the other hand, the collection of behavioral data entails privacy invasion. The objective of the paper is to shed light on the trade-off between privacy and the business value of customer information. To that end, the authors review approaches to convert clickstream data into behavioral traits, which they call clickstream features, and propose a categorization of these features according to the potential threat they pose to user privacy. The authors then examine the extent to which different categories of clickstream features facilitate predictions of online user shopping patterns and approximate the marginal utility of using more privacy-adverse information in behavioral prediction models. Thus, the paper links the literature on user privacy to that on e-commerce analytics and takes a step toward an economic analysis of privacy costs and benefits. In particular, the results of empirical experimentation with large real-world e-commerce data suggest that the inclusion of short-term customer behavior based on session-related information leads to large gains in predictive accuracy and business performance, while storing and aggregating usage behavior over longer horizons has comparably less value.
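
    The marginal value of session-level information can be approximated by comparing a model trained on long-horizon features alone against one that also sees session features. The snippet below is a synthetic-data sketch (feature groups, model, and metric are assumptions made for illustration, not the paper's experimental setup).

        # Compare purchase-prediction AUC with and without session-level features.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000
        long_term = rng.normal(size=(n, 3))   # e.g. aggregated past-visit counts
        session = rng.normal(size=(n, 3))     # e.g. dwell time, pages per session
        logit = 0.3 * long_term[:, 0] + 1.2 * session[:, 0] - 0.5
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        def auc(features):
            X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
            model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
            return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

        print("long-term only:    ", round(auc(long_term), 3))
        print("+ session features:", round(auc(np.hstack([long_term, session])), 3))

    The gap between the two scores is a rough proxy for the marginal predictive value of session-level information.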

    Behavioural verification: preventing report fraud in decentralized advert distribution systems

    Service commissions, which are claimed by Ad-Networks and Publishers, are susceptible to forgery, as non-human operators are able to artificially create fictitious traffic on digital platforms for the purpose of committing financial fraud. This places a significant strain on Advertisers, who have no effective means of differentiating fabricated Ad-Reports from those which correspond to real consumer activity. To address this problem, we contribute an advert reporting system which utilizes opportunistic networking and a blockchain-inspired construction in order to identify authentic Ad-Reports by determining whether they were composed by honest or dishonest users. What constitutes a user's honesty for our system is the manner in which they access adverts on their mobile device. Dishonest users submit multiple reports over a short period of time, whereas honest users behave as consumers who view adverts at a balanced pace while engaging in typical social activities such as purchasing goods online, moving through space, and interacting with other users. We argue that it is hard for dishonest users to fake honest behaviour, and we exploit the behavioural patterns of users in order to classify Ad-Reports as real or fabricated. By determining the honesty of the user who submitted a particular report, our system offers a more secure reward-claiming model which protects against fraud while still preserving the user's anonymity.
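
    The core behavioural signal described above, report rate over time, can be sketched as a sliding-window check. The window length and per-window limit below are hypothetical placeholders rather than values from the paper, and the real system combines this signal with opportunistic-networking and blockchain-inspired evidence.

        # Accept a report only if the submitter's recent report rate looks honest.
        from collections import defaultdict, deque

        WINDOW_S = 3600        # look-back window in seconds (assumed)
        MAX_REPORTS = 5        # reports an honest consumer is assumed to submit per window

        history = defaultdict(deque)   # user_id -> timestamps of recent reports

        def accept_report(user_id, timestamp):
            q = history[user_id]
            while q and timestamp - q[0] > WINDOW_S:
                q.popleft()
            q.append(timestamp)
            return len(q) <= MAX_REPORTS

        print([accept_report("u1", t) for t in range(0, 70, 10)])
        # first five accepted, the burst beyond the limit rejected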

    Refocusing Loyalty Programs in the Era of Big Data: A Societal Lens Paradigm

    Big data and technological change have enabled loyalty programs to become more prevalent and complex. How these developments influence society has been overlooked, both in academic research and in practice. We argue why this issue is important and propose a framework to refocus loyalty programs in the era of big data through a societal lens. We focus on three aspects of the societal lens: inequality, privacy, and sustainability. We discuss how loyalty programs in the big data era impact each of these societal factors, and then illustrate how, by adopting this societal lens paradigm, researchers and practitioners can generate insights and ideas that address the challenges and opportunities that arise from the interaction between loyalty programs and society. Our goal is to broaden the perspectives of researchers and managers so they can enhance loyalty programs to address evolving societal needs.

    Location reliability and gamification mechanisms for mobile crowd sensing

    People-centric sensing with smartphones can be used for large-scale sensing of the physical world by leveraging the sensors on the phones. This new type of sensing can be a scalable and cost-effective alternative to deploying static wireless sensor networks for dense sensing coverage across large areas. However, mobile people-centric sensing has two main issues: 1) reliability of the sensed data, and 2) incentives for participants. To study these issues, this dissertation designs and develops McSense, a mobile crowd sensing system which provides monetary and social incentives to users. This dissertation proposes and evaluates two protocols for location reliability as a step toward achieving data reliability in sensed data, namely ILR (Improving Location Reliability) and LINK (Location authentication through Immediate Neighbors Knowledge). ILR is a scheme which improves the location reliability of mobile crowd sensed data with minimal human effort, based on location validation using photo tasks and expanding trust to nearby data points using periodic Bluetooth scanning. LINK is a location authentication protocol that works independently of wireless carriers, in which nearby users help authenticate each other's location claims using Bluetooth communication. The results of experiments done on Android phones show that the proposed protocols are capable of detecting a significant percentage of malicious users claiming false locations. Furthermore, simulations with the LINK protocol demonstrate that LINK can effectively thwart a number of colluding user attacks. This dissertation also proposes a mobile sensing game which helps collect crowd sensing data by incentivizing smartphone users to play sensing games on their phones. We design and implement a first-person shooter sensing game, “Alien vs. Mobile User”, which employs techniques to attract users to unpopular regions. The user study results show that mobile gaming can be a successful alternative to micro-payments for fast and efficient area coverage in crowd sensing. It is observed that the proposed game design succeeds in achieving good player engagement.
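
    LINK's neighbor-assisted verification can be pictured as a weighted vouching rule: a location claim is accepted only if enough trusted Bluetooth neighbours confirm seeing the claimant. The trust weights and acceptance threshold below are illustrative assumptions, not the protocol's actual scoring.

        # Toy LINK-style check: trusted neighbours vouch for a location claim.
        def verify_claim(claimant, vouches, trust, min_score=2.0):
            """vouches: list of (neighbour_id, saw_claimant); trust: id -> weight."""
            score = sum(trust.get(n, 0.0) for n, saw in vouches if saw and n != claimant)
            return score >= min_score

        trust = {"alice": 1.0, "bob": 0.8, "carol": 0.6}
        vouches = [("alice", True), ("bob", True), ("carol", False)]
        print(verify_claim("dave", vouches, trust))   # 1.8 < 2.0 -> False

    Colluding attackers must then control several already-trusted nearby devices to pass the check, which is the kind of attack the dissertation's simulations evaluate.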