730 research outputs found

    Regulating Habit-Forming Technology

    Tech developers, like slot machine designers, strive to maximize the user’s “time on device.” They do so by designing habit-forming products—products that draw consciously on the same behavioral design strategies that the casino industry pioneered. The predictable result is that most tech users spend more time on device than they would like, about five hours of phone time a day, while a substantial minority develop life-changing behavioral problems similar to problem gambling. Other countries have begun to regulate habit-forming tech, and American jurisdictions may soon follow suit. Several state legislatures today are considering bills to regulate “loot boxes,” a highly addictive slot-machine-like mechanic that is common in online video games. The Federal Trade Commission has also announced an investigation into the practice. As public concern mounts, it is surprisingly easy to envision consumer regulation extending beyond video games to other types of apps. Just as tobacco regulations might prohibit brightly colored packaging and fruity flavors, a social media regulation might limit the use of red notification badges or “streaks” that reward users for daily use. It is unclear how much of this regulation could survive First Amendment scrutiny; software, unlike other consumer products, is widely understood as a form of protected “expression.” But it is also unclear whether well-drawn laws to combat compulsive technology use would seriously threaten First Amendment values. At a very low cost to the expressive interests of tech companies, these laws may well enhance the quality and efficacy of online speech by mitigating distraction and promoting deliberation.

    Protecting Consumer Data Privacy with Arbitration


    Privacy’s Trust Gap: A Review


    Protecting Public Health Amidst Data Theft, Sludge, and Dark Patterns: Overcoming the Constitutional Barriers to Health Information Regulations

    Public health spending has grown to over $4.1 trillion in the past year, yet for millions of people, health care remains ineffective and sometimes harmful. New technologies have improved health access and treatment, but they can expose an individual’s personal health information to theft and misuse. There is little or no regulation of the reuse of data once it has been lawfully collected for general purposes. Any observer can create a detailed personal diary of an individual or a population by building from a mosaic of inferential data—such as lawfully obtained zip code information, unregulated health care application data, purchasing patterns, location information, and social media engagement. Using behavioral economics, companies manipulate the public into unhealthy lifestyle choices and promote health care scams. The FTC has labeled these practices dark patterns, designed to nudge consumers into overpayments and choices that maximize revenue rather than wellness. The article first addresses the nature of the health care threat posed by the unregulated marketplace and the limited role the FTC plays in stopping unfair and deceptive practices that harm the public. The article next addresses the evolution of commercial speech doctrine and the extent to which the FTC will be able to expand its protections in light of recent case law. The article identifies new approaches for the FTC to expand its regulatory protection in a manner consistent with the heightened scrutiny applied by the Supreme Court. Finally, while calling for increased enforcement that meets First Amendment scrutiny, the article also promotes a new governmental strategy: funding public health information with sufficient resources to overcome the existing public health disinformation industry and to provide accurate, timely, and behaviorally motivating information to the public in order to save lives and promote better health. Recognizing that the police power is insufficient to stem the tide of disinformation, the article calls for a comprehensive approach to producing public health information that benefits all members of the public.

    The Case for Establishing a Collective Perspective to Address the Harms of Platform Personalization

    Personalization on digital platforms drives a broad range of harms, including misinformation, manipulation, social polarization, subversion of autonomy, and discrimination. In recent years, policy makers, civil society advocates, and researchers have proposed a wide range of interventions to address these challenges. This Article argues that the emerging toolkit reflects an individualistic view of both personal data and data-driven harms and will likely be inadequate to address growing harms in the global data ecosystem. It maintains that interventions must be grounded in an understanding of the fundamentally collective nature of data, wherein platforms leverage complex patterns of behaviors and characteristics observed across a large population to draw inferences and make predictions about individuals. Using the lens of the collective nature of data, this Article evaluates various approaches to addressing personalization-driven harms under current consideration. It also frames concrete guidance for future legislation in this space and for meaningful transparency that goes far beyond current transparency proposals. It offers a roadmap for what meaningful transparency must constitute: a collective perspective providing a third party with ongoing insight into the information gathered and observed about individuals and how it correlates with any personalized content they receive across a large, representative population. These insights would enable the third party to understand, identify, quantify, and address cases of personalization-driven harms. This Article discusses how such transparency can be achieved without sacrificing privacy and provides guidelines for legislation to support the development of such transparency.

    The Cautionary Tale of the Failed 2002 FTC/DOJ Merger Clearance Accord

    Antitrust law in the United States is the patchwork result of over two hundred years of evolving and often conflicting views of the government’s proper role in regulating business. Depending upon the social and business climate of the era and the economic philosophies of Congress, the President, and the judiciary, federal antitrust jurisdiction has waxed and waned. The result is the current system wherein the Department of Justice Antitrust Division (“Antitrust Division”) and the Federal Trade Commission (“FTC”) share dual jurisdiction to enforce the federal antitrust laws. However, in the push and pull of the changing eras, the intersection of the two agencies’ jurisdiction has become hazy and often troublesome. Nowhere is the uncertainty more evident than in the process by which the two agencies decide which will review and investigate a proposed merger.