Employed Algorithms: A Labor Model of Corporate Liability for AI
The workforce is digitizing. Leading consultancies estimate that algorithmic systems will replace 45 percent of human-held jobs by 2030. One feature that algorithms share with the human employees they are replacing is their capacity to cause harm. Even today, corporate algorithms discriminate against loan applicants, manipulate stock markets, collude over prices, and cause traffic deaths. Ordinarily, corporate employers would be responsible for these injuries, but the rules for assessing corporate liability arose at a time when only humans could act on behalf of corporations. Those rules apply awkwardly, if at all, to silicon. Some corporations have already discovered this legal loophole and are rapidly automating business functions to limit their own liability risk.
This Article seeks a way to hold corporations accountable for the harms of their digital workforce: some algorithms should be treated, for liability purposes, as corporate employees. Drawing on existing functional characterizations of employment, the Article defines the concept of an “employed algorithm” as one over which a corporation exercises substantial control and from which it derives substantial benefits. If a corporation employs an algorithm that causes criminal or civil harm, the corporation should be liable just as if the algorithm were a human employee. Plaintiffs and prosecutors could then leverage existing, employee-focused liability rules to hold corporations accountable when the digital workforce transgresses.
Limiting Identity in Criminal Law
People change with time. Personalities, values, and preferences shift incrementally as people accrue life experience, discover new sources of meaning, and form or lose memories. Eventually, accumulated psychological changes reshape not only how someone relates to the world around her, but also who she is as a person. This transience of human identity has profound implications for criminal law. Previous legal scholarship on personal identity has assumed that only abrupt tragedy and disease can change who we are. Psychologists, however, now know that the ordinary processes of growth, maturation, and decline alter us all in fundamental respects. Many young adults find it hard to identify with their adolescent past. Senior citizens often reflect similarly on their middle years. However tightly we hold on to the people we are today, at some tomorrow we inevitably find ourselves changed.
Criminal justice has not come to grips with this aspect of the human condition. The law—by imposing lengthy sentences, allowing enduring consequences of conviction, and punishing long bygone violations—assumes that people’s identities remain fixed from birth to death. If people do change with time, these policies must violate criminal law’s most basic commitment to prosecute and punish present-day people only for crimes they (and not some different past person) committed.
Drawing on contemporary psychology and philosophy of personal identity, this Article concludes that criminal law punishes too often and too severely. Lengthy prison terms risk incarcerating people past the point at which their identity changes. Elderly inmates who have languished on death row for decades should have a new claim for release—that they are now different people, innocent of the misdeeds of yesteryear. One-time felons should recover lost civil rights sooner. And defendants should benefit from juvenile process well into their twenties, when personal identity first begins to stabilize. By confronting the challenges posed by the limits of personal identity, the criminal law can become more just and humane.
Successor Identity
The law of successor criminal liability is simple: corporate successors are liable for the crimes of their predecessors. Always. Any corporation that results from any merger, consolidation, spin-off, etc., is on the hook for all the crimes of all the corporations that went into the process. Such a coarse-grained, one-track approach fails to recognize that not all reorganizations are cut from the same cloth. As a result, it skews corporate incentives against reorganizing in more socially beneficial ways. It also risks punishing corporate successors unjustly.
The Corporate Insanity Defense
Corporate criminal justice rests on the fiction that corporations possess “minds” capable of instantiating culpable mens rea. The retributive and deterrent justifications for punishing criminal corporations are strongest when those minds are well-ordered. In such cases misdeeds are most likely to reflect malice, and sanctions are most likely to have their intended preventive benefits. But what if a corporate defendant’s mind is disordered? Organizational psychology and economics have tools to identify normally functioning organizations that are fully accountable for the harms they cause. These disciplines can also diagnose dysfunctional organizations where the threads of accountability may have frayed and where sanctions would not deter. Punishing such corporations undermines the goals of criminal law, leaves victim interests unaddressed, and is unfair to corporate stakeholders.
This Article argues that some corporate criminal defendants should be able to raise the insanity defense. Statutory text makes the insanity defense available to all qualifying defendants. When a corporate criminal defendant’s mind is sufficiently disordered, basic criminal law purposes also support the defense. Corporate crime in these cases may trace to dysfunctional systems or subversive third parties rather than to corporate malice. For example, individual corporate employees may thwart well-meaning corporate policies to pursue personal advantage at the expense of the corporation itself. Corporations then may seem more like victims of their own misconduct rather than perpetrators of it.
Justice and prevention favor treatment of insane corporations rather than punishment. Recognizing the corporate insanity defense would better serve victims’ and stakeholders’ interests in condemning and preventing corporate misconduct. Treatment would create an opportunity for government experts to reform dysfunctional corporations in a way that predominant modes of corporate punishment cannot. Effective reform takes victims seriously by minimizing the chance that others will be harmed. It also spares corporate stakeholders unnecessary punishment for corporate misconduct that could be sanctioned in more constructive ways.
Corporate Criminal Minds
To commit the vast majority of crimes, corporations must, in some sense, have mental states. Lawmakers and scholars assume that factfinders need fundamentally different procedures for attributing mental states to corporations and individuals. As a result, they saddle themselves with unjustifiable theories of mental state attribution, like respondeat superior, that produce results wholly at odds with all the major theories of the objectives of criminal law.
This Article draws on recent findings in cognitive science to develop a new, comprehensive approach to corporate mens rea that would better allow corporate criminal law to fulfill its deterrent, retributive, and expressive aims. It does this by letting factfinders attribute mental states to corporations at trial as they ordinarily do to similar groups out of the courtroom. Under this new approach, factfinders would be asked to treat corporate defendants much like natural person defendants. Rather than atomize corporations into individual employees, factfinders would view them holistically. Then, factfinders could do just what they do for natural people—in light of surrounding circumstances and other corporate acts, infer what mental state most likely accompanied the act at issue. Such a theory harmonizes with recent cognitive scientific findings on mental state and responsibility attribution, developments that corporate liability scholars have mostly ignored.
Vicarious Liability for AI
When an algorithm harms someone—say by discriminating against her, exposing her personal data, or buying her stock using inside information—who should pay? If that harm is criminal, who deserves punishment? In ordinary cases, when A harms B, the first step in the liability analysis turns on what sort of thing A is. If A is a natural phenomenon, like a typhoon or mudslide, B pays, and no one is punished. If A is a person, then A might be liable for damages and sanction. The trouble with algorithms is that neither paradigm fits. Algorithms are trainable artifacts with “off” switches, not natural phenomena. They are not people either, as a matter of law or metaphysics.
An appealing way out of this dilemma would start by complicating the standard A-harms-B scenario. It would recognize that a third party, C, usually lurks nearby when an algorithm causes harm, and that third party is a person (legal or natural). By holding third parties vicariously accountable for what their algorithms do, the law could promote efficient incentives for people who develop or deploy algorithms and secure just outcomes for victims.
The challenge is to find a model of vicarious liability that is up to the task. This Essay provides a set of criteria that any model of vicarious liability for algorithmic harms should satisfy. The criteria cover a range of desiderata: from ensuring good outcomes, to maximizing realistic prospects for implementation, to advancing programming values such as explainability. Though relatively few in number, the criteria are demanding. Most available models of vicarious liability fail them. Nonetheless, the Essay ends on an optimistic note. The shortcomings of the models considered below hold important lessons for uncovering a more promising alternative.
Branding Corporate Criminals
Corporate punishment has a branding problem. Criminal sanctions should call out wrongdoing and condemn wrongdoers. In a world where generic corporate misconduct is a daily affair, conviction singles out truly contemptible practices from merely sharp, unproductive, or undesirable ones. In this way, criminal law gives victims the recognition they deserve, deters future wrongdoers who want to preserve their good name, and publicly reinforces society’s most treasured values.
Unfortunately, corporate punishment falls far short of all these communicative ambitions. For punishment to convey its intended message, society must be able to hear about it. When courts convict individuals, everyone understands that the conviction places a mark of enduring stigma: “felon,” “thief,” “murderer,” and “fraudster.” The state reinforces this communiqué by reserving its harshest and most degrading treatment for individual criminals, caging them and possibly killing them. Corporate punishment, by contrast, is a fleeting affair diluted by civil and administrative off-ramps, public relations spin, and a frenetic media environment. In today’s criminal justice system, it can be hard to identify who the corporate criminals even are. Unsurprisingly, corporations view criminal charges as inconvenient economic uncertainties and criminal fines as mere costs of doing business. Public perceptions have largely followed suit.
Corporate criminal law could disrupt this perverse dynamic by adopting a new sanction that would “brand” corporate criminals. Although branding sanctions could take many forms—different visual marks of varying size—this Article calls for, at a minimum, appending a criminal designation, ⓕ, to corporate felons’ legal names and mandating its appearance on products and communications. This “corporate criminal brand” would stand as a twenty-first century corporate reimagining of its medieval corporal namesake. Lawmakers rightly rejected physical brands on individual criminals long ago. The criminal justice landscape is different for corporations, which feel no pain and have no dignity interests. Unlike monetary fines, corporate criminal branding would unambiguously signal a corporation’s criminal status to outside observers. By forcibly integrating corporations’ criminal identity into their public image, criminal law might finally have a way to recognize victims and strike at what corporations value most.