
    The Curious Case of Uncurious Creation

    This paper seeks to answer the question: can contemporary forms of artificial intelligence be creative? To answer this question, I consider three conditions that are commonly taken to be necessary for creativity: novelty, value, and agency. I argue that while contemporary AI models may have a claim to novelty and value, they cannot satisfy the kind of agency condition required for creativity. From this discussion, a new condition for creativity emerges: creativity requires curiosity, a motivation to pursue epistemic goods. I argue that contemporary AI models do not satisfy this new condition. Because they lack both agency and curiosity, it is a mistake to attribute the same sort of creativity to AI that we prize in humans. Finally, I consider the question of whether these AI models stand to make human creativity in the arts and sciences obsolete, despite not being creative themselves. I argue, optimistically, that this is unlikely.

    Moral responsibility for unforeseen harms caused by autonomous systems

    Autonomous systems are machines which embody Artificial Intelligence and Machine Learning and which take actions in the world, independently of direct human control. Their deployment raises a pressing question, which I call the 'locus of moral responsibility' question: who, if anyone, is morally responsible for a harm caused directly by an autonomous system? My specific focus is moral responsibility for unforeseen harms. First, I set up the 'locus of moral responsibility' problem. Unforeseen harms from autonomous systems create a problem for what I call the Standard View, rooted in common sense, that human agents are morally responsible. Unforeseen harms give credence to the main claim of 'responsibility gap' arguments: that humans do not meet the control and knowledge conditions of responsibility sufficiently to warrant such an ascription. Second, I argue that a delegation framework offers a powerful route for answering the 'locus of moral responsibility' question. I argue that responsibility as attributability traces to the human principals who made the decision to delegate to the system, notwithstanding a later suspension of control and knowledge. These principals would also be blameworthy if their decision to delegate did not serve a purpose that morally justified the subsequent risk-imposition in the first place. Because I argue that different human principals share moral responsibility, I defend a pluralist Standard View. Third, I argue that, while today's autonomous systems do not meet the agential condition for moral responsibility, it is neither conceptually incoherent nor physically impossible that they might. Because I take it to be a contingent and not a necessary truth that human principals exclusively bear moral responsibility, I defend a soft, pluralist Standard View. Finally, I develop and sharpen my account in response to possible objections, and I explore its wider implications.