258 research outputs found

    Artificial Stupidity

    Artificial intelligence is everywhere. And yet, the experts tell us, it is not yet actually anywhere. This is because we have yet to achieve artificial general intelligence, or artificially intelligent systems that are capable of thinking for themselves and adapting to their circumstances. Instead, all the AI hype—and it is constant—concerns narrower, weaker forms of artificial intelligence, which are confined to performing specific, narrow tasks. The promise of true artificial general intelligence thus remains elusive. Artificial stupidity reigns supreme. What is the best set of policies to achieve more general, stronger forms of artificial intelligence? Surprisingly, scholars have paid little attention to this question. Scholars have spent considerable time assessing a number of important legal questions relating to artificial intelligence, including privacy, bias, tort, and intellectual property issues. But little effort has been devoted to exploring what set of policies is best suited to helping artificial intelligence developers achieve greater levels of innovation. And examining such issues is not some niche exercise, because artificial intelligence has already affected, or soon will affect, every sector of society. Hence, the question goes to the heart of future technological innovation policy more broadly. This Article examines this question by exploring how well intellectual property rights promote innovation in artificial intelligence. I focus on intellectual property rights because they are often viewed as the most important piece of United States innovation policy. Overall, I argue that intellectual property rights, particularly patents, are ill-suited to promote more radical forms of artificial intelligence innovation. And even the intellectual property types that are a better fit for artificial intelligence innovators, such as trade secrecy, come with problems of their own. In fact, the poor fit of patents in particular may contribute to heavy industry consolidation in the AI field, and heavy consolidation in an industry is typically associated with lower-than-ideal levels of innovation. I conclude by arguing, however, that neither strengthening AI patent rights nor looking to other forms of law, such as antitrust, holds much promise in achieving more general forms of artificial intelligence. Instead, as with many earlier radical innovations, significant government backing, coupled with an engaged entrepreneurial sector, is at least one key to avoiding enduring artificial stupidity.

    Artificial Stupidity

    Public debate about AI is dominated by Frankenstein Syndrome, the fear that AI will become superhuman and escape human control. Although superintelligence is certainly a possibility, the interest it excites can distract the public from a more imminent concern: the rise of Artificial Stupidity (AS). This article discusses the roots of Frankenstein Syndrome in Mary Shelley’s famous novel of 1818. It then provides a philosophical framework for analysing the stupidity of artificial agents, demonstrating that modern intelligent systems can be seen to suffer from ‘stupidity of judgement’. Finally, it identifies an alternative literary tradition that exposes the perils and benefits of AS. In the writings of Edmund Spenser, Jonathan Swift and E.T.A. Hoffmann, ASs replace, enslave or delude their human users. More optimistically, Joseph Furphy and Laurence Sterne imagine ASs that can serve human intellect as maps or as pipes. These writers provide a strong counternarrative to the myths that currently drive the AI debate. They identify ways in which even stupid artificial agents can evade human control, for instance by appealing to stereotypes or distancing us from reality. And they underscore the continuing importance of the literary imagination in an increasingly automated society.

    Natural and Artificial Stupidity


    Artificial Stupidity: A Reply

    Murphy, Koehler, and Fogler [1997] gave, in the last issue of the Journal of Portfolio Management, an account of how to raise a neural net’s IQ. The purpose of this reply is to point out some of the general difficulties with neural nets. I would also like to mention an alternative method, namely PadĂ© approximants, which does not suffer from these difficulties.
    Keywords: Artificial; Stupidity; Neural Networks

    The Pharmacological Significance of Mechanical Intelligence and Artificial Stupidity

    Drawing on the philosophy of Bernard Stiegler, this paper explores the phenomenon of mechanical (a.k.a. artificial, digital, or electronic) intelligence in terms of its real significance: an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (a pharmacological analysis of poisons and remedies) by practices of care. It does so through the outlook of what researchers equivocally describe as “artificial stupidity”, which has been identified both as a new direction in the future of computer science and machine problem solving and as a new difficulty to be overcome. I weave together a web of “artificial stupidity”, which denotes the mechanical (1), the human (2), or the global (3). With regard to machine intelligence, artificial stupidity refers to: 1a) Weak A.I., or a rhetorical inversion that designates contemporary practices of narrow, task-based procedures by algorithms in opposition to “True A.I.”; 1b) the restriction or employment of constraints that weaken the effectiveness of A.I., which is to say a “dumbing-down” of A.I. by programmers who intentionally introduce mistakes for safety and human-interaction purposes; 1c) the failure of machines to perform designated tasks; 1d) a lack of noetic capacity, which is a lack of moral and ethical discretion; 1e) a lack of causal reasoning (true intelligence) as opposed to statistical, associative “curve fitting”; 2) the phenomenon of increasing human “stupidity”, or drive-based behaviors, considered as the degradation of human intelligence and/or “intelligent human behavior” through technics; and finally, 3) the global phenomenon of increasing entropy due to a black-box economy of closed systems and/or industry consolidation.

    Natural intelligence or artificial stupidity?

    “If liberty means anything at all, it means the right to tell people what they do not want to hear” (George Orwell). If it is true that a well-posed problem is half solved (Henri PoincarĂ©), it is truer still that a precisely defined problem ceases to be one. Moreover, once you have eliminated everything that is impossible, whatever remains, however improbable, is inevitably the solution (Arthur Conan Doyle). Thus a fit of collective hysteria, such as the Great Panic of 2020, naturally comes to an end when the substrate on which it rests (concepts, models, theories) is exhausted. These intellectual objects are in fact merely beliefs, “acts of faith”, which are always difficult to discard. “It is difficult to get a man to understand something, when his salary depends on his not understanding it” (Upton Sinclair). The Reality Principle invites us to acknowledge the existence of a higher, destabilizing reality that does not conform to its idealization. The twenty-first century could thus mark the beginning of a new era, centred on ethical withdrawal and transcendence. Natural intelligence or artificial stupidity? “If you don't understand that, you're in the wrong business.” (Joe Biden)

    Artificial Stupidity: Data We Need to Make Machines Our Equals

    AI must understand human limitations to provide good service and safe interactions. Standardized data on human limits would be valuable in many domains but is not available. The data science community has to work on collecting and aggregating such data in a common and widely available format, so that any AI researcher can easily look up the applicable limit measurements for their latest project.
    • 

    corecore