29 research outputs found

    Fingerprinting Web Users Through Font Metrics


    Case study: disclosure of indirect device fingerprinting in privacy policies

    Recent developments in online tracking make it harder for individuals to detect and block trackers. This is especially true for device fingerprinting techniques that websites use to identify and track individual devices. Direct trackers (those that directly ask the device for identifying information) can often be blocked with browser configurations or other simple techniques. However, some sites have shifted to indirect tracking methods, which attempt to uniquely identify a device by asking the browser to perform a seemingly unrelated task. One type of indirect tracking, known as Canvas fingerprinting, causes the browser to render a graphic and records statistics about the rendering as a unique identifier. Even experts find it challenging to discern some indirect fingerprinting methods. In this work, we aim to observe how indirect device fingerprinting methods are disclosed in privacy policies, and consider whether the disclosures are sufficient to enable website visitors to block the tracking methods. We compare these disclosures to the disclosure of direct fingerprinting methods on the same websites.

    Our case study analyzes one indirect fingerprinting technique, Canvas fingerprinting. We use an existing automated detector of this fingerprinting technique to conservatively detect its use on Alexa Top 500 websites that cater to United States consumers, and we examine the privacy policies of the resulting 28 websites. Disclosures of indirect fingerprinting vary in specificity. None described the specific methods with enough granularity to know the website used Canvas fingerprinting. Conversely, many sites did provide enough detail about usage of direct fingerprinting methods to allow a website visitor to reliably detect and block those techniques. We conclude that indirect fingerprinting methods are often technically difficult to detect, and are not identified with specificity in legal privacy notices. This makes indirect fingerprinting more difficult to block, and therefore risks disturbing the tentative armistice between individuals and websites currently in place for direct fingerprinting. This paper illustrates differences in fingerprinting approaches, and explains why technologists, technology lawyers, and policymakers need to appreciate the challenges of indirect fingerprinting.

    Accepted manuscript
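    As a rough illustration of the indirect technique this abstract describes, the TypeScript sketch below shows the commonly documented core of Canvas fingerprinting: the page asks the browser to draw text and shapes, then hashes the rendered pixels, which differ subtly across GPUs, drivers, and font stacks. This is a minimal sketch under assumed conditions (a browser environment), not code from the paper; the drawn string and the choice of SHA-256 are arbitrary.

```typescript
// Minimal sketch of Canvas fingerprinting (browser environment assumed).
// Not the paper's code; the drawn text and SHA-256 digest are arbitrary choices.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 280;
  canvas.height = 60;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas not supported");

  // The "seemingly unrelated task": draw text and shapes. Small differences
  // in font rendering and anti-aliasing make the pixels device-specific.
  ctx.textBaseline = "top";
  ctx.font = "14px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(125, 1, 62, 20);
  ctx.fillStyle = "#069";
  ctx.fillText("Cwm fjordbank glyphs vext quiz \u{1F600}", 2, 15);

  // Serialize the rendered pixels and hash them into a compact identifier.
  const bytes = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

    Because the script never asks the device for an identifier directly, nothing in this flow looks like tracking to a user or a simple blocker, which is exactly the disclosure problem the case study examines.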

    Not All Browsers are Created Equal: Comparing Web Browser Fingerprintability


    Browser Fingerprinting: How to Protect Machine Learning Models and Data with Differential Privacy?

    As modern communication networks grow more and more complex, manually maintaining an overview of deployed software and hardware is challenging. Mechanisms such as fingerprinting are utilized to automatically extract information from ongoing network traffic and map it to a specific device or application, e.g., a browser. Active approaches directly interfere with the traffic and impose security risks, or are simply infeasible. Therefore, passive approaches are employed, which only monitor traffic but require a well-designed feature set since less information is available. However, even these passive approaches impose privacy risks: browser identification from encrypted traffic may lead to data leakage, e.g., of the browser history of users. We propose a passive browser fingerprinting method based on explainable features and evaluate two privacy protection mechanisms, namely differentially private classifiers and differentially private data generation. With a differentially private Random Decision Forest, we achieve an accuracy of 0.877. If we train a non-private Random Forest on differentially private synthetic data, we reach an accuracy of up to 0.887, showing a reasonable trade-off between utility and privacy.
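    The paper's differentially private forest is too involved for a short sketch, but its standard building block, adding calibrated Laplace noise before counts are released or used for training, can be shown compactly. The TypeScript below illustrates only the Laplace mechanism, not the authors' method; the epsilon value and the per-browser feature histogram are hypothetical.

```typescript
// Illustrative sketch of the Laplace mechanism, a standard building block
// behind many differentially private classifiers and data generators.
// Not the authors' method; epsilon and the histogram below are hypothetical.

// Draw one sample from Laplace(0, scale) via inverse transform sampling.
function sampleLaplace(scale: number): number {
  const u = Math.random() - 0.5; // uniform on (-0.5, 0.5)
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// Privatize a histogram of counts. Adding or removing one record changes a
// single bucket by at most 1, so the L1 sensitivity is 1 and the noise
// scale is 1 / epsilon.
function privatizeHistogram(
  counts: Record<string, number>,
  epsilon: number,
): Record<string, number> {
  const noisy: Record<string, number> = {};
  for (const [key, count] of Object.entries(counts)) {
    // Clamp at zero: negative counts carry no useful information here.
    noisy[key] = Math.max(0, count + sampleLaplace(1 / epsilon));
  }
  return noisy;
}

// Hypothetical per-browser counts of some encrypted-traffic feature.
const observed = { firefox: 412, chrome: 951, safari: 187 };
console.log(privatizeHistogram(observed, 1.0)); // epsilon = 1.0
```

    A smaller epsilon means stronger privacy but noisier counts, which mirrors the utility-privacy trade-off the abstract reports between the 0.877 and 0.887 accuracy figures.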
