
    Liability for AI Decision-Making: Some Legal and Ethical Considerations

    The creation and commercialization of AI decision-making systems raise the question of how liability risks will play out in real life. However, as technical advancements have outpaced legal developments, it is unclear how the law will treat AI systems. This Article briefly addresses the legal ramifications and liability risks associated with reliance on, or delegation to, AI systems, and it sketches a framework suggesting how we can address the question of whether AI merits a new approach to deal with the liability challenges it raises when humans remain “in” or “on” the loop.

    Human control over automation: EU Policy and AI Ethics

    In this article I problematize the use of algorithmic decision-making (ADM) applications to automate legal decision-making processes from the perspective of the European Union (EU) policy on trustworthy artificial intelligence (AI). Lately, the use of ADM systems across various fields, ranging from public to private and from criminal justice to credit scoring, has given rise to concerns about the negative consequences that data-driven technologies have in reinforcing and reinterpreting existing societal biases. This development has led to a growing demand for ethical AI, often perceived to require human control over automation. By engaging in discussions of human-computer interaction and in post-structural policy analysis, I examine EU policy proposals that address the problematizations of AI through human oversight. I argue that the relevant policy documents do not reflect the results of earlier research, which has undeniably demonstrated the shortcomings of human control over automation; this in turn leads to the reproduction of the harmful dichotomy of human versus machine in EU policy. Despite its shortcomings, the emphasis on human oversight reflects broader fears surrounding loss of control, framed as ethical concerns around digital technologies. Critical examination of these fears reveals an inherent connection between human agency and the legitimacy of legal decision-making that socio-legal scholarship needs to address.

    Evaluation of a Non-Timed Artificial Insemination Practice Applied to Suffolk Ewes During Early Breeding Season

    The objective of this study was to determine whether the AM/PM artificial insemination procedure used in cattle works in sheep. Estrus was detected using vasectomized rams fitted with marking harnesses. Once detection occurred, ewes were artificially inseminated 12 to 24 hours later. Artificial insemination took place from August 24 until September 12, 2014, during which a total of 49 Suffolk ewes were inseminated. Successful insemination was determined by use of a ram fitted with a marking harness and by ultrasound examination at the completion of the project. It was concluded that there is no difference between artificial insemination at 12 hours and at 24 hours after detection of estrus.
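
    The timing comparison above is easy to make concrete. The following sketch (hypothetical Python; the function name and example times are illustrative, not taken from the study) computes the two candidate insemination times compared in the abstract, 12 and 24 hours after estrus detection.

        from datetime import datetime, timedelta

        # Hypothetical illustration of the timing comparison described in
        # the abstract: given the time a ewe is first marked by a teaser
        # ram, compute the two candidate insemination times the study
        # compares (12 h vs. 24 h after detection of estrus).
        def insemination_times(detection_time: datetime) -> dict:
            return {
                "12h": detection_time + timedelta(hours=12),
                "24h": detection_time + timedelta(hours=24),
            }

        # Example: a ewe detected in estrus at 7:00 AM on August 24, 2014
        # could be bred that evening (12 h) or the following morning (24 h).
        print(insemination_times(datetime(2014, 8, 24, 7, 0)))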

    The Future of AI Accountability in the Financial Markets

    Consumer interaction with the financial market ranges from applying for credit cards, to financing the purchase of a home, to buying and selling securities. And with each transaction, the lender, bank, and brokerage firm are likely utilizing artificial intelligence (AI) behind the scenes to augment their operations. While AI’s ability to process data at high speeds and in large quantities makes it an important tool for financial institutions, it is imperative to be attentive to the risks and limitations that accompany its use. In the context of financial markets, AI’s lack of decision-making transparency, often called the “black box problem,” along with AI’s dependence on quality data, presents additional complexities when considering the aggregate effect of algorithms deployed in the market. Owing to these issues, the benefits of AI must be weighed against the particular risks that accompany the spread of this technology throughout the markets. Financial regulation, as it stands, is complex, expensive, and often involves overlapping regulations and regulators. Thus far, financial regulators have responded by publishing guidance and standards for firms utilizing AI tools, but they have stopped short of demanding access to source code, setting specific standards for developers, or otherwise altering traditional regulatory frameworks. While regulators are no strangers to regulating new financial products or technology, fitting AI within the traditional frameworks of prudential regulation, registration requirements, supervision, and enforcement actions leaves concerning gaps in oversight. This Article examines the suitability of the current financial regulatory frameworks for overseeing AI in the financial markets. It suggests that regulators consider developing multi-faceted approaches to promote AI accountability. This Article recognizes the potential harms and likelihood of regulatory arbitrage if these regulatory gaps remain unattended, and it thus suggests focusing on key elements for future regulation, namely the human developers and the regulation of data, to truly “hold AI accountable.” Therefore, holding AI accountable requires identifying the different ways in which sophisticated algorithms may cause harm to the markets and consumers if ineffectively regulated, and developing an approach that can respond flexibly to these broad concerns. Notably, this Article cautions against reliance on self-regulation and recommends that future policies take an adaptive approach to address current and future AI technologies.

    Why We Should Have Seen That Coming: Comments on Microsoft’s Tay “Experiment,” and Wider Implications

    In this paper we examine the case of Tay, the Microsoft AI chatbot that was launched in March 2016. After less than 24 hours, Microsoft shut down the experiment because the chatbot was generating tweets judged to be inappropriate, including racist, sexist, and anti-Semitic language. We contend that the case of Tay illustrates a problem inherent in the very nature of learning software (LS), a term describing any software that changes its program in response to its interactions, when it interacts directly with the public, and in the developer’s role and responsibility associated with it. We make the case that when LS interacts directly with people, or indirectly via social media, the developer has additional ethical responsibilities beyond those of standard software. There is an additional burden of care.
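
    To make the paper’s notion of learning software concrete, here is a minimal sketch (hypothetical Python; the class and its behavior are illustrative and are not Microsoft’s implementation) of the failure mode described above: software that folds raw public input back into its own output with no moderation layer.

        import random

        # Hypothetical toy "learning software": a chatbot that changes its
        # behavior in response to its interactions. Every user message is
        # stored verbatim and may be replayed to any later user, so
        # coordinated abusive input becomes abusive output.
        class NaiveLearningChatbot:
            def __init__(self):
                self.learned_replies = []

            def interact(self, user_message):
                self.learned_replies.append(user_message)  # no filtering step
                return random.choice(self.learned_replies)

        bot = NaiveLearningChatbot()
        bot.interact("hello there")
        # Any abusive message sent here can later surface in replies to
        # other users, which mirrors the dynamic the paper describes.
        print(bot.interact("another unfiltered public message"))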