    Gonzalez v. Google: The Case for Protecting Targeted Recommendations

    Does Section 230 of the Communications Decency Act protect online platforms (e.g., Facebook, YouTube, and Twitter) when they use recommendation algorithms? Lower courts upheld platforms’ immunity, notwithstanding notable dissenting opinions. The Supreme Court considers this question in Gonzalez v. Google, LLC. Plaintiffs invite the Court to analyze “targeted recommendations” generically and to revoke Section 230 immunity for all recommended content. We think this would be a mistake. This Article contributes to existing scholarship about Section 230 and online speech governance by adding much-needed clarity to the desirable—and undesirable—regulation of recommendation algorithms. Specifically, this Article explains the technology behind algorithmic recommendations, the questions it raises for Section 230 immunity, and the stakes in Gonzalez. It opposes generically revoking Section 230 immunity for all uses of recommendation algorithms. Instead, it illustrates and defends a nuanced approach for the desired outcome of Gonzalez and for future possible regulation of recommendation algorithms.

    Risk and Rights in Transatlantic Data Transfers: EU Privacy Law, U.S. Surveillance, and the Search for Common Ground

    Privacy advocates rightly view the Court of Justice of the European Union (CJEU) decision in Data Protection Commissioner v. Facebook Ireland Ltd. and Maximilian Schrems (Schrems II) as a landmark. But one stakeholder’s landmark is another’s headache. The CJEU’s decision invalidated the EU-U.S. Privacy Shield agreement governing transatlantic transfers of personal data. Citing U.S. surveillance, the CJEU found that data transfers lacked adequate privacy protections under the EU’s General Data Protection Regulation (GDPR). The Schrems II decision thus clouded the future of data transfers that help drive the global economy. This Article offers a hybrid approach to safeguard privacy rights and ensure the viability of transatlantic data flows. The Article’s hybrid approach is an alternative to two less promising ways of reading the CJEU’s groundbreaking decision. The European Data Protection Board (EDPB) issued recommendations adopting a de facto absolutist view of the duties imposed by Schrems II. The EDPB guidance narrows the role of risk assessments that gauge the probability of U.S. surveillance of particular data. The EDPB places greater stock in technical measures, such as steep EU-centered encryption that thwarts U.S. surveillance and impedes access for U.S. firms. This unduly strict approach undermines the whole point of transatlantic data transfers. Another response to Schrems II takes a “don’t worry, be happy” tack. Heralds of optimism assure audiences on both sides of the Atlantic that most transatlantic data transfers are immune as a matter of law from U.S. surveillance, including collection under section 702 of the Foreign Intelligence Surveillance Act (FISA) or Executive Order 12333 (EO 12333). Unfortunately for this optimistic turn, U.S. surveillance authorities are sufficiently broad to reach many communications by EU individuals. In particular, section 702’s provision for collecting communications related to U.S. “foreign affairs” lacks any intelligible limiting principle or specific review of targeting decisions. The U.S. Foreign Intelligence Surveillance Court (FISC) does not approve every target under section 702, although it has the power to scrutinize targeting procedures. Collection under EO 12333 is even broader and not subject to FISC review. In sum, surveillance optimism is a rhetorical trope, not a legal strategy. Navigating between the EDPB’s strict approach and the heralds’ unfounded optimism, this Article proposes a hybrid model. The hybrid model outlines a risk-assessment method based on U.S. export controls, which have successfully managed exports of sensitive technology for decades. This model can also be a template for managing transfers of sensitive personal data. In addition, the hybrid model proposes bolstering substantive and institutional safeguards in U.S. law. For example, the Article proposes an Algorithmic Rights Court (ARC) that would probe targeting decisions under both section 702 and EO 12333. Through more precise risk assessment and reinforced institutional and substantive protections, the hybrid model preserves privacy and supports a sustainable transatlantic data transfer regime.

    Anonymization and Risk

    Perfect anonymization of data sets that contain personal information has failed. But the process of protecting data subjects in shared information remains integral to privacy practice and policy. While the deidentification debate has been vigorous and productive, there is no clear direction for policy. As a result, the law has been slow to adopt a holistic approach to protecting data subjects when data sets are released to others. Currently, the law is focused on whether an individual can be identified within a given set. We argue that the best way to move data release policy past the alleged failures of anonymization is to focus on the process of minimizing risk of reidentification and sensitive attribute disclosure, not preventing harm. Process-based data release policy, which resembles the law of data security, will help us move past the limitations of focusing on whether data sets have been “anonymized.” It draws upon different tactics to protect the privacy of data subjects, including accurate deidentification rhetoric, contracts prohibiting reidentification and sensitive attribute disclosure, data enclaves, and query-based strategies to match required protections with the level of risk. By focusing on process, data release policy can better balance privacy and utility where nearly all data exchanges carry some risk.

    Privacy and Security in the Cloud: Some Realism About Technical Solutions to Transnational Surveillance in the Post-Snowden Era

    Since June 2013, the leak of thousands of classified documents regarding highly sensitive U.S. surveillance activities by former National Security Agency (NSA) contractor Edward Snowden has greatly intensified discussions of privacy, trust, and freedom in relation to the use of global computing and communication services. This is happening during a period of ongoing transition to cloud computing services by organizations, businesses, and individuals. There has always been a question inherent in this transition: are cloud services sufficiently able to guarantee the security of their customers’ data as well as the proper restrictions on access by third parties, including governments? While worries over government access to data in the cloud are a predominant part of the ongoing debate over the use of cloud services, the Snowden revelations highlight that intelligence agency operations pose a unique threat to the ability of services to keep their customers’ data out of the hands of domestic as well as foreign governments. The search for a proper response is ongoing, from the perspective of market players, governments, and civil society. At the technical and organizational level, industry players are responding with the wider and more sophisticated deployment of encryption as well as a new emphasis on the use of privacy enhancing technologies and innovative architectures for securing their services. These responses are the focus of this Article, which contributes to the discussion of transnational surveillance by looking at the interaction between the relevant legal frameworks on the one hand, and the possible technical and organizational responses of cloud service providers to such surveillance on the other. While the Article’s aim is to contribute to the debate about government surveillance with respect to cloud services in particular, much of the discussion is relevant for Internet services more broadly.

    The Effect of Medicare Eligibility on Spousal Insurance Coverage

    A majority of married couples in the United States take advantage of the fact that employers often provide health insurance coverage to spouses. When the older spouse becomes eligible for Medicare, however, many can no longer provide their younger spouses with coverage. In this paper, we study how spousal eligibility for Medicare affects the health insurance and health care access of the younger spouse. We find that spousal eligibility for Medicare results in the younger spouse having worse insurance coverage and reduced access to health care services.