    Case study: disclosure of indirect device fingerprinting in privacy policies

    Recent developments in online tracking make it harder for individuals to detect and block trackers. This is especially true for device fingerprinting techniques that websites use to identify and track individual devices. Direct trackers, those that directly ask the device for identifying information, can often be blocked with browser configurations or other simple techniques. However, some sites have shifted to indirect tracking methods, which attempt to uniquely identify a device by asking the browser to perform a seemingly unrelated task. One type of indirect tracking, known as Canvas fingerprinting, causes the browser to render a graphic and record rendering statistics that serve as a unique identifier. Even experts find it challenging to discern some indirect fingerprinting methods. In this work, we aim to observe how indirect device fingerprinting methods are disclosed in privacy policies, and consider whether the disclosures are sufficient to enable website visitors to block the tracking methods. We compare these disclosures to the disclosure of direct fingerprinting methods on the same websites. Our case study analyzes one indirect fingerprinting technique, Canvas fingerprinting. We use an existing automated detector of this fingerprinting technique to conservatively detect its use on Alexa Top 500 websites that cater to United States consumers, and we examine the privacy policies of the resulting 28 websites. Disclosures of indirect fingerprinting vary in specificity. None described the specific methods with enough granularity to know that the website used Canvas fingerprinting. Conversely, many sites did provide enough detail about their usage of direct fingerprinting methods to allow a website visitor to reliably detect and block those techniques. We conclude that indirect fingerprinting methods are often technically difficult to detect, and are not identified with specificity in legal privacy notices. This makes indirect fingerprinting more difficult to block, and therefore risks disturbing the tentative armistice between individuals and websites currently in place for direct fingerprinting. This paper illustrates the differences in fingerprinting approaches, and explains why technologists, technology lawyers, and policymakers need to appreciate the challenges of indirect fingerprinting.
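
    To make the technique concrete for readers, the sketch below shows how a Canvas fingerprinting script might work in a browser. It is a minimal illustration in TypeScript; the drawn text, colours, and SHA-256 hashing step are illustrative assumptions rather than the exact routine used by any website in the study.

```typescript
// Minimal sketch of Canvas fingerprinting, assuming a browser environment
// (DOM and Web Crypto). Details are illustrative, not any site's actual script.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 220;
  canvas.height = 30;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "canvas-unavailable";

  // Draw text and shapes; differences in fonts, anti-aliasing, and GPU
  // rendering make the resulting pixels vary slightly across devices.
  ctx.textBaseline = "top";
  ctx.font = "14px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(100, 1, 62, 20);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint test 😃", 2, 15);

  // Read the rendered pixels back and hash them into a compact identifier.
  const pixels = canvas.toDataURL();
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(pixels)
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

    Because the identifier is derived from how the device renders graphics rather than from anything stored on the device, clearing cookies or refusing identifier requests does not defeat it, which is what makes the disclosure question examined above important.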

    Twenty Years of Web Scraping and the Computer Fraud and Abuse Act

    Web scraping is a ubiquitous technique for extracting data from the World Wide Web, done through a computer script that sends tailored queries to websites to retrieve specific pieces of content. The technique has proliferated under the ever-expanding shadow of the Computer Fraud and Abuse Act (CFAA), which, among other things, prohibits obtaining information from a computer by accessing the computer without authorization or exceeding one's authorized access. Unsurprisingly, many litigants have now turned to the CFAA in an attempt to police against unwanted web scraping. Yet despite the rise in both web scraping and lawsuits about web scraping, practical advice about the legality of web scraping is hard to come by, and rarely extends beyond a rough combination of "try not to get caught" and "talk to a lawyer." Most often the legal status of scraping is characterized as something just shy of unknowable, or a matter entirely left to the whims of courts, plaintiffs, or prosecutors. Uncertainty does indeed exist in the caselaw, and may stem in part from how courts approach the act of web scraping on a technical level. In describing the act of web scraping, courts misstate some of its qualities and suggest that the technique is inherently more invasive or burdensome than it actually is. The first goal of this piece is to clarify how web scrapers operate, and to explain why one should not think of web scraping as inherently more burdensome or invasive than humans browsing the web. The second goal of this piece is to more fully articulate how courts approach the all-important question of whether a web scraper accesses a website without authorization under the CFAA. I suggest here that there is a fair amount of madness in the caselaw, but not without some method. Specifically, this piece breaks down the twenty years of web scraping litigation (and the sixty-one opinions that this litigation has generated) into four rough phases of thinking around the critical access question. The first runs through the first decade of scraping litigation, and is marked by cases that adopt an expansive interpretation of the CFAA, with the potential to reach all scrapers so long as a website can point to some mechanism that signaled access was unauthorized. The second, starting in the late 2000s, was marked by a narrowing of the CFAA and a greater focus on code-based controls against scraping, a move that tended to benefit scrapers. In the third phase, courts receded to a broad view of the CFAA, brought about by the development of a revocation theory of unauthorized access. And most recently, spurred in part by the same policy concerns that led courts to constrain the CFAA in the first place, courts have begun to rethink this result. The conclusion of this piece identifies the broader questions about the CFAA and web scraping that courts must contend with in order to bring more harmony and comprehension to this area of law. These include how to deal with conflicting instructions on authorization coming from different channels on the same website, how the analysis should interact with existing technical protocols that regulate web scraping, including the Robots Exclusion Standard, and what other factors beyond the wishes of the website host should govern application of the CFAA to unwanted web scraping.
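
    To make the mechanics of scraping and the Robots Exclusion Standard concrete, the sketch below shows a minimal scraper that consults a site's robots.txt before fetching a page. It is written in TypeScript assuming a runtime with the fetch API; the allowedByRobots and scrape helpers and the deliberately naive robots.txt parsing are illustrative assumptions, not a complete implementation of the protocol or a statement about what the CFAA permits.

```typescript
// Minimal sketch of a scraper that checks robots.txt before fetching.
// Assumes a runtime with the fetch API (e.g. Node 18+).
async function allowedByRobots(url: string, userAgent = "*"): Promise<boolean> {
  const target = new URL(url);
  const res = await fetch(`${target.origin}/robots.txt`);
  if (!res.ok) return true; // no robots.txt: treat the site as unrestricted

  let groupApplies = false;
  for (const raw of (await res.text()).split("\n")) {
    const line = raw.trim();
    const lower = line.toLowerCase();
    if (lower.startsWith("user-agent:")) {
      // Naive grouping: only exact matches on the user-agent token.
      groupApplies = line.slice("user-agent:".length).trim() === userAgent;
    } else if (groupApplies && lower.startsWith("disallow:")) {
      const path = line.slice("disallow:".length).trim();
      if (path && target.pathname.startsWith(path)) return false;
    }
  }
  return true;
}

async function scrape(url: string): Promise<string | null> {
  if (!(await allowedByRobots(url))) return null; // honour the exclusion rule
  const res = await fetch(url, {
    headers: { "User-Agent": "example-scraper/0.1" }, // identify the script
  });
  return res.ok ? res.text() : null;
}
```

    A scraper like this issues the same kind of HTTP requests a browser does, one page at a time, which is part of why the piece argues that scraping is not inherently more burdensome than human browsing.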

    New materials from waste and renewable oils

    The work presented in this thesis concerns the chemical modification of waste and renewable vegetable oils to yield monomers for polyurethane, azide-alkyne click and nitrile oxide-alkyne click polymerisations. Chapter 1 provides a brief introduction to the use of waste materials for new products, followed by a more detailed overview of triglyceride chemistry, finishing with an introduction to ‘click’ chemistry. Chapter 2 discusses optimisation studies of the acid-catalysed ring-opening of epoxidised cocoa butter, followed by polyurethane synthesis. The percentage of ring-opening was found to be influenced by the amount of phase-transfer catalyst, the reaction concentration and the equivalents of acid. Mechanical properties (Young’s modulus (YM), tensile strength (TS) and elongation at break (EoB)) were determined, and thermal analysis (TGA, DSC) performed, on cocoa butter-based polyurethanes both with and without food-safe dyes, demonstrating a more environmentally friendly, renewable oil source for polyurethane synthesis. Chapter 3 focuses on the use of azide-alkyne click chemistry to produce renewable polymers from dimeric fatty amides (capable of hydrogen bonding) with increasing linker length and azide functionality. Samples were synthesised from purified oleic acid and linoleic acid and from the cheaper, more commercially available rapeseed oil and soybean oil. Thermal properties (TGA, DSC) of copper-mediated and thermally produced polymers were analysed, and mechanical properties (YM, TS and EoB) of the thermally produced polymers were also investigated, showing that increasing linker length increased elongation and decreased tensile strength, and demonstrating the importance of hydrogen bonding between polymer chains. Chapter 4 expands on azide-alkyne click polymerisation through the synthesis of a range of monomers containing both azide and alkyne units, and therefore capable of homopolymerisation. Increasing chain length, azide functionality and hydrogen-bonding possibilities were again tested using the same four starting materials as in Chapter 3, as well as increasing cross-linking possibilities, and the results were found to be comparable with those established in Chapter 3. Chapter 5 concentrates on using nitrile oxide-alkyne click polymerisation as an alternative and safe method of producing renewable polymers derived from vegetable oils. Two approaches were used: base-mediated and thermally mediated polymerisations, with the polymers produced subjected to thermal analysis (TGA, DSC). Chapter 6 describes the experimental details and chemical analysis of the key reactions and processes described in the thesis.

    Multi-regulation computing: examining the legal and policy questions that arise from secure multiparty computation

    This work examines privacy laws and regulations that limit disclosure of personal data, and explores whether and how these restrictions apply when participants use cryptographically secure multi-party computation (MPC). By protecting data during use, MPC can help to foster the positive effects of data usage while mitigating potential negative impacts of data sharing in scenarios where participants want to analyze data that is subject to one or more privacy laws, especially when these laws are in apparent conflict so that data cannot be shared in the clear. But paradoxically, most adoptions of MPC to date involve data that is not subject to any formal privacy regulation. We posit that a major impediment to the adoption of MPC is the difficulty of mapping this new technology onto the design principles of data privacy laws. To address this issue, and with the goal of spurring adoption of MPC, this work introduces the first systematic framework to reason about the extent to which secure multiparty computation implicates data privacy laws. Our framework revolves around three questions: a definitional question on whether the encodings still constitute ‘personal data’, a process question about whether the act of executing MPC constitutes a data disclosure event, and a liability question about what happens if something goes wrong. We conclude by providing advice to regulators and suggestions to early adopters to spur uptake of MPC.
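
    As a concrete illustration of how MPC can protect data during use, the sketch below shows additive secret sharing over a prime field, one simple building block used in many MPC protocols. It is a minimal TypeScript sketch: the field modulus, the share and reconstruct helpers, and the three-party sum are illustrative assumptions, not the specific protocols or deployments analysed in the paper, and the randomness shown is not cryptographically secure.

```typescript
// Minimal sketch of additive secret sharing: each party holds shares that look
// random on their own, so no single party sees the underlying personal data.
const P = 2147483647n; // 2^31 - 1, used here as the field modulus

// Split a secret into `parties` shares that sum to the secret mod P.
function share(secret: bigint, parties: number): bigint[] {
  const shares: bigint[] = [];
  let sum = 0n;
  for (let i = 0; i < parties - 1; i++) {
    // Toy randomness for illustration; real MPC uses cryptographic randomness.
    const r = BigInt(Math.floor(Math.random() * 2 ** 30)) % P;
    shares.push(r);
    sum = (sum + r) % P;
  }
  shares.push((((secret % P) - sum) % P + P) % P); // final share completes the sum
  return shares;
}

// Adding shares mod P recovers the sum of the underlying secrets.
function reconstruct(shares: bigint[]): bigint {
  return shares.reduce((acc, s) => (acc + s) % P, 0n);
}

// Example: three parties jointly compute a total without revealing their inputs.
const inputs = [120n, 450n, 75n];
const perParty: bigint[][] = [[], [], []];
for (const x of inputs) {
  share(x, 3).forEach((s, i) => perParty[i].push(s));
}
const partialSums = perParty.map((col) => reconstruct(col)); // each party sums locally
console.log(reconstruct(partialSums)); // 645n, the sum of the inputs
```

    Whether the individual shares still count as 'personal data', and whether running such a computation is a 'disclosure', are exactly the definitional and process questions the framework above is built around.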

    Maintenance cost implications of utilizing bathroom modules manufactured offsite

    Though the benefits of using offsite technologies have been well rehearsed, their uptake within the UK construction industry is slow. A critical barrier is the lack of cost data on using such technology. Another is the unsubstantiated perception that maintenance of offsite solutions is difficult and expensive; yet again, there appears to be no data publicly available on this topic. This knowledge gap is addressed by presenting the cost data of maintaining offsite and insitu bathrooms for student accommodation. The records of 732 maintenance jobs were investigated. These jobs span three years and 398 bathrooms, including precast concrete modules, Glass Reinforced Polyester (GRP) modules and insitu bathrooms. The results suggest that GRP modules required the lowest maintenance costs, whilst insitu bathrooms were significantly more expensive to maintain. For offsite modules, drainage, toilets, vents and sinks were identified as the main problematic areas for maintenance. The maintenance of insitu bathrooms was more complex and involved a wider range of problematic areas. The design had significant effects on the long-term cost of offsite bathrooms: the aspirations of clients need to be fully understood and integrated into the design. The findings should facilitate design decision-making on using offsite bathrooms for residential buildings.

    Reduced loads of pre-existing Gill-associated virus (GAV) infection in juvenile Penaeus monodon injected with single or multiple GAV-specific dsRNAs

    RNA interference (RNAi) based on injected dsRNA was investigated here for its ability to reduce the severity of pre-existing subclinical Gill-associated virus (GAV) infections in farm stocks of juvenile Black Tiger shrimp (Penaeus monodon). Following tail-muscle injection of single or multiple long dsRNAs targeted to sequences positioned across the GAV ORF1a/1b replicase genes, pleopods were sampled sequentially from individuals at regular intervals over a 2-week period to track changes in GAV RNA loads by quantitative real-time RT-PCR. Mean GAV RNA amounts showed statistically significant (

    Effects of hospital facilities on patient outcomes after cancer surgery: an international, prospective, observational study

    Background: Early death after cancer surgery is higher in low-income and middle-income countries (LMICs) than in high-income countries, yet the impact of facility characteristics on early postoperative outcomes is unknown. The aim of this study was to examine the association of hospital infrastructure, resource availability, and processes with early outcomes after cancer surgery worldwide.
    Methods: A multimethods analysis was performed as part of the GlobalSurg 3 study, a multicentre, international, prospective cohort study of patients who had surgery for breast, colorectal, or gastric cancer. The primary outcomes were 30-day mortality and 30-day major complication rates. Potentially beneficial hospital facilities were identified by variable selection of those associated with 30-day mortality. Adjusted outcomes were determined using generalised estimating equations to account for patient characteristics and country income group, with population stratification by hospital.
    Findings: Between April 1, 2018, and April 23, 2019, facility-level data were collected for 9685 patients across 238 hospitals in 66 countries (91 hospitals in 20 high-income countries; 57 hospitals in 19 upper-middle-income countries; and 90 hospitals in 27 low-income to lower-middle-income countries). The availability of five hospital facilities was inversely associated with mortality: ultrasound, CT scanner, critical care unit, opioid analgesia, and oncologist. After adjustment for case mix and country income group, hospitals with three or fewer of these facilities (62 hospitals, 1294 patients) had higher mortality than those with four or five (adjusted odds ratio [OR] 3.85 [95% CI 2.58-5.75]; p<0.0001), with excess mortality predominantly explained by a limited capacity to rescue following the development of major complications (63.0% vs 82.7%; OR 0.35 [0.23-0.53]; p<0.0001). Across LMICs, improvements in hospital facilities would prevent one to three deaths for every 100 patients undergoing surgery for cancer.
    Interpretation: Hospitals with higher levels of infrastructure and resources have better outcomes after cancer surgery, independent of country income. Without urgent strengthening of hospital infrastructure and resources, the reductions in cancer-associated mortality associated with improved access will not be realised.