Recognizing Operators’ Duties to Properly Select and Supervise AI Agents – A (Better?) Tool for Algorithmic Accountability

Abstract

In November 2020, the Privacy Commissioner of Canada proposed creating GDPR-inspired rights for decision subjects and allowing financial penalties for violations of those rights. Shortly afterward, the proposal to create a right to an explanation for algorithmic decisions was incorporated into Bill C-11, the Digital Charter Implementation Act. This commentary proposes that creating duties for operators to properly select and supervise artificial agents would be a complementary, and potentially more effective, accountability mechanism than creating a right to an explanation. These duties would be a natural extension of employers' duties to properly select and retain human employees. Allowing victims to recover under theories of negligent hiring or supervision of AI systems as agents would reflect those systems' increasing (but less than full) autonomy and avoid some of the challenges that victims face in proving the foreseeability elements of other liability theories.
