
    Social Influence and Visual Attention in the Personalization Privacy Paradox for Social Advertising: An Eye Tracking Study

    The personalization privacy paradox suggests that the personalization of advertising increases ad relevance but simultaneously triggers privacy concerns as firms make use of consumers' information. We combine a lab experiment with eye tracking and survey methodology to investigate the role of informational social influence and visual attention in the personalization privacy paradox for social advertising. While previous research pointed towards social influence increasing consumers' trust in advertisers, we find that social influence does not help to reduce consumer privacy concerns originating in personalization. Next, our findings contradict the presence of a negativity bias directing consumers' attention to negatively perceived stimuli. We show that privacy concerns decrease consumers' attention towards personalized ads, subsequently leading to a decrease in ad clicks. This finding supports a positive role of visual attention for advertising performance. Thus, privacy concerns triggered by personalization negatively influence ad performance through a decrease in attention towards ads. Our analysis indicates that consumers need to process ad information sufficiently, i.e. dedicate a sufficient amount of attention to the ad, to actually experience privacy concerns.
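    The causal chain the abstract describes (privacy concerns reduce attention, which in turn reduces clicks) is a mediation effect. The sketch below shows a simple product-of-coefficients mediation check on synthetic data; it is only illustrative, not the paper's eye-tracking analysis, and all variable names and effect sizes are assumptions.

```python
# Illustrative mediation sketch: privacy concerns -> ad attention -> ad clicks.
# Synthetic data and variable names are assumptions, not the study's measures.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
privacy_concern = rng.normal(size=n)
attention = -0.4 * privacy_concern + rng.normal(size=n)   # concerns reduce attention
ad_clicks = 0.5 * attention + rng.normal(size=n)          # attention drives clicks

# Path a: privacy concern -> attention
a = sm.OLS(attention, sm.add_constant(privacy_concern)).fit().params[1]
# Path b: attention -> clicks, controlling for privacy concern
X = sm.add_constant(np.column_stack([attention, privacy_concern]))
b = sm.OLS(ad_clicks, X).fit().params[1]

print(f"indirect effect (a*b) of privacy concern on clicks via attention: {a * b:.3f}")
```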

    Towards Query Logs for Privacy Studies: On Deriving Search Queries from Questions

    Translating verbose information needs into crisp search queries is a phenomenon that is ubiquitous but hardly understood. Insights into this process could be valuable in several applications, including synthesizing large privacy-friendly query logs from public Web sources which are readily available to the academic research community. In this work, we take a step towards understanding query formulation by tapping into the rich potential of community question answering (CQA) forums. Specifically, we sample natural language (NL) questions spanning diverse themes from the Stack Exchange platform, and conduct a large-scale conversion experiment where crowdworkers submit search queries they would use when looking for equivalent information. We provide a careful analysis of this data, accounting for possible sources of bias during conversion, along with insights into user-specific linguistic patterns and search behaviors. We release a dataset of 7,000 question-query pairs from this study to facilitate further research on query understanding. Comment: ECIR 2020 Short Paper
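    To make the question-to-query conversion concrete, here is a minimal keyword-extraction heuristic that turns a verbose NL question into a crisp query. This is only a sketch of the task; the paper collects real queries from crowdworkers rather than deriving them automatically, and the stopword list below is an assumption.

```python
# Toy illustration of question -> query conversion: drop function words from a
# verbose question to obtain a keyword-style search query.
import re

STOPWORDS = {
    "a", "an", "the", "is", "are", "was", "were", "do", "does", "did", "can", "could",
    "how", "what", "which", "who", "when", "where", "why", "i", "my", "to", "of", "in",
    "on", "for", "and", "or", "it", "that", "this", "be", "with",
}

def question_to_query(question: str) -> str:
    tokens = re.findall(r"[a-z0-9']+", question.lower())
    return " ".join(t for t in tokens if t not in STOPWORDS)

print(question_to_query("How can I restore a deleted branch in git after a hard reset?"))
# -> "restore deleted branch git after hard reset"
```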

    The Importance of Transparency and Willingness to Share Personal Information

    This study investigates the extent to which individuals are willing to share their sensitive personal information with companies. The study examines whether skepticism can influence willingness to share information. Additionally, it seeks to determine whether transparency can moderate the relationship between skepticism and willingness to share, and whether 1) companies' perceived motives, 2) individuals' prior privacy violations, 3) individuals' propensity to take risks, and 4) individuals' self-efficacy act as antecedents of skepticism. Partial Least Squares (PLS) regression is used to examine the relationships between all the factors. The findings indicate that skepticism does have a negative impact on willingness to share personal information and that transparency can reduce skepticism.
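    The moderation the study tests (transparency weakening the negative skepticism-to-willingness link) can be illustrated with an interaction-term regression on synthetic data. The paper itself uses PLS; the OLS formulation, variable names, and effect sizes below are assumptions used purely for illustration.

```python
# Hedged sketch of a moderation model: willingness ~ skepticism * transparency.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "skepticism": rng.normal(size=n),
    "transparency": rng.normal(size=n),
})
# Willingness drops with skepticism, but transparency weakens that negative effect.
df["willingness"] = (-0.5 * df.skepticism
                     + 0.2 * df.transparency
                     + 0.3 * df.skepticism * df.transparency
                     + rng.normal(size=n))

model = smf.ols("willingness ~ skepticism * transparency", data=df).fit()
print(model.params[["skepticism", "skepticism:transparency"]])
```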

    Pyramid: Enhancing Selectivity in Big Data Protection with Count Featurization

    Protecting vast quantities of data poses a daunting challenge for the growing number of organizations that collect, stockpile, and monetize it. The ability to distinguish data that is actually needed from data collected "just in case" would help these organizations to limit the latter's exposure to attack. A natural approach might be to monitor data use and retain only the working set of in-use data in accessible storage; unused data can be evicted to a highly protected store. However, many of today's big data applications rely on machine learning (ML) workloads that are periodically retrained by accessing, and thus exposing to attack, the entire data store. Training set minimization methods, such as count featurization, are often used to limit the data needed to train ML workloads in order to improve performance or scalability. We present Pyramid, a limited-exposure data management system that builds upon count featurization to enhance data protection. As such, Pyramid uniquely introduces both the idea and a proof of concept for leveraging training set minimization methods to instill rigor and selectivity into big data management. We integrated Pyramid into Spark Velox, a framework for ML-based targeting and personalization. We evaluate it on three applications and show that Pyramid approaches state-of-the-art models while training on less than 1% of the raw data.
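    Count featurization, the training-set minimization idea Pyramid builds on, replaces a high-cardinality raw feature with compact per-value label statistics, so models can be trained from small count tables rather than the full raw data store. The sketch below is a generic illustration of that idea, not Pyramid's implementation (which additionally adds noise and windowed counts for protection); the column names and data are assumptions.

```python
# Generic count featurization sketch: aggregate raw events into per-value counts,
# then featurize new rows with a smoothed historical rate instead of the raw value.
from collections import defaultdict

def build_count_table(rows, feature_key, label_key):
    """Aggregate (positive labels, total observations) per feature value."""
    table = defaultdict(lambda: [0, 0])
    for row in rows:
        stats = table[row[feature_key]]
        stats[0] += row[label_key]   # positive-label count (e.g. clicks)
        stats[1] += 1                # total observations
    return table

def featurize(row, table, feature_key):
    """Replace the raw value with its smoothed historical click-through rate."""
    clicks, total = table.get(row[feature_key], (0, 0))
    return (clicks + 1) / (total + 2)   # Laplace smoothing handles unseen values

events = [
    {"ad_id": "a1", "clicked": 1},
    {"ad_id": "a1", "clicked": 0},
    {"ad_id": "a2", "clicked": 0},
]
counts = build_count_table(events, "ad_id", "clicked")
print(featurize({"ad_id": "a1"}, counts, "ad_id"))   # (1+1)/(2+2) = 0.5
```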