
    The Evolution of Gendered Software: Products, Scientific Reasoning, Criticism, and Tools

    Over the past seven decades, gendered software has become established worldwide. In this theoretical contribution, I outline the evolution of gendered software. The journey began with the raw idea fueled by Alan Turing’s imitation game in the 1950s, and shortly thereafter, in the 1960s and 1970s, the first gendered software products, such as Joseph Weizenbaum’s ELIZA, were developed. Academia then took its time not only to explore the technological aspects but also to investigate the matter of gender, most notably in the 1990s with the CASA paradigm (Nass et al., 1994) and the Media Equation (Reeves & Nass, 1996). As these theories reasoned about the social impact of gendered software, the voice assistants of the 2010s proved to be real-world examples that stirred criticism. By posing the question of “boy or girl” across the decades, I take a deeper look at aspects such as raison d’être, realization, consequences, and future possibilities that ultimately challenge the applied gender binary. In doing so, it becomes evident that gendered software is situated in the bigger context of gender inequalities. I therefore propose considering (1) product name, (2) voice, and (3) personality traits as decisive features that form powerful tools in the process of gendering software.

    A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law

    Artificial intelligence tools can now “write” in such a sophisticated manner that they fool people into believing a human wrote the text. None is better at writing than GPT-3, released in 2020 for beta testing and coming to commercial markets in 2021. GPT-3 was trained on a massive dataset that included scrapes of language from sources ranging from the New York Times to Reddit boards. It therefore comes as no surprise that researchers have already documented instances of bias in which GPT-3 spews toxic language. But because GPT-3 is so good at “writing,” and can easily be trained to write in a specific voice, from classic Shakespeare to Taylor Swift, it is poised for wide adoption in the field of law. This Article explores the ethical considerations that will follow from GPT-3’s introduction into lawyers’ practices. GPT-3 is new, but the use of AI in the field of law is not; AI has already thoroughly suffused the practice of law. GPT-3 is likely to take hold as well, generating some early excitement that it and other AI tools could help close the access-to-justice gap. That excitement should nevertheless be tempered with a realistic assessment of GPT-3’s tendency to produce biased outputs. As amended, the Model Rules of Professional Conduct acknowledge the impact of technology on the profession and provide some guardrails for its use by lawyers. This Article is the first to apply the current guidance to GPT-3, concluding that it is inadequate. I examine three specific Model Rules, Rule 1.1 (Competence), Rule 5.3 (Supervision of Nonlawyer Assistance), and Rule 8.4(g) (Bias), and propose amendments that focus lawyers on their duties and require them to regularly educate themselves about the pros and cons of using AI to ensure the ethical use of this emerging technology.

    Ethical tensions in artificial intelligence: A conceptual analysis

    The rapid development of artificial intelligence technologies has provoked intense political and scholarly debate on the ethics of artificial intelligence. Due to an underdeveloped technological framework, the debate has stagnated, creating difficulties for the regulation that is essential for wider adoption of new technologies. The main problem with the current ethical discussion on artificial intelligence is its blurriness, as the terminology used is not exact. In the information sciences, data, information, knowledge, intelligence, and wisdom are recognized as distinct concepts, but this is disregarded in the current AI ethics discussion. Another deficit of the ethical discussion is that ethics is seen as the good or the right, which turns the focus away from ethics as a decision process between conflicting interests. This study is a conceptual analysis in which the ethical discussion on artificial intelligence is examined within a technological framework. In this thesis I propose that the discussion on artificial intelligence could be restructured into a technological framework consisting of data, information, and artificial intelligence. Parsing the discussion through this technological framework would be useful in understanding the wider picture, and it might also have practical implications by making concepts more transparent and thus helping to create regulation for artificial intelligence.