Artificial Stupidity
Artificial intelligence is everywhere. And yet, the experts tell us, it is not yet actually anywhere. This is because we have yet to achieve artificial general intelligence: artificially intelligent systems that are capable of thinking for themselves and adapting to their circumstances. Instead, all the AI hype (and it is constant) concerns narrower, weaker forms of artificial intelligence, which are confined to performing specific, narrow tasks. The promise of true artificial general intelligence thus remains elusive. Artificial stupidity reigns supreme.
What is the best set of policies to achieve more general, stronger forms of artificial intelligence? Surprisingly, scholars have paid little attention to this question. Scholars have spent considerable time assessing a number of important legal questions relating to artificial intelligence, including privacy, bias, tort, and intellectual property issues. But little effort has been devoted to exploring what set of policies is best suited to helping artificial intelligence developers achieve greater levels of innovation. And examining such issues is not some niche exercise, because artificial intelligence has already affected, or soon will affect, every sector of society. Hence, the question goes to the heart of future technological innovation policy more broadly.
This Article examines this question by exploring how well intellectual property rights promote innovation in artificial intelligence. I focus on intellectual property rights because they are often viewed as the most important piece of United States innovation policy. Overall, I argue that intellectual property rights, particularly patents, are ill-suited to promote more radical forms of artificial intelligence innovation. And even the intellectual property types that are a better fit for artificial intelligence innovators, such as trade secrecy, come with problems of their own. In fact, the poor fit of patents in particular may contribute to heavy industry consolidation in the AI field, and heavy consolidation in an industry is typically associated with lower than ideal levels of innovation.
I conclude by arguing, however, that neither strengthening AI patent rights nor looking to other forms of law, such as antitrust, holds much promise in achieving more general forms of artificial intelligence. Instead, as with many earlier radical innovations, significant government backing, coupled with an engaged entrepreneurial sector, is at least one key to avoiding enduring artificial stupidity.
Artificial Stupidity
Public debate about AI is dominated by Frankenstein Syndrome, the fear that AI will become superhuman and escape human control. Although superintelligence is certainly a possibility, the interest it excites can distract the public from a more imminent concern: the rise of Artificial Stupidity (AS). This article discusses the roots of Frankenstein Syndrome in Mary Shelley's famous novel of 1818. It then provides a philosophical framework for analysing the stupidity of artificial agents, demonstrating that modern intelligent systems can be seen to suffer from "stupidity of judgement". Finally, it identifies an alternative literary tradition that exposes the perils and benefits of AS. In the writings of Edmund Spenser, Jonathan Swift and E.T.A. Hoffmann, ASs replace, enslave or delude their human users. More optimistically, Joseph Furphy and Laurence Sterne imagine ASs that can serve human intellect as maps or as pipes. These writers provide a strong counternarrative to the myths that currently drive the AI debate. They identify ways in which even stupid artificial agents can evade human control, for instance by appealing to stereotypes or distancing us from reality. And they underscore the continuing importance of the literary imagination in an increasingly automated society.
Artificial Stupidity: A Reply
Murphy, Koehler, and Fogler [1997] gave in the last issue of the Journal of Portfolio Management an account of how to raise a neural net's IQ. The purpose of this reply is to point out some of the general difficulties with neural nets. I would also like to mention an alternative method, namely Padé approximants, which does not suffer from these difficulties. Keywords: Artificial Stupidity; Neural Networks.
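As an illustrative aside (not part of the reply's abstract), a Padé approximant replaces a truncated Taylor series with a ratio of two polynomials matched to the same series coefficients. The sketch below, with function names of my own choosing, computes the [2/2] Padé approximant of exp(x) exactly from its Taylor coefficients and compares it with the degree-4 Taylor polynomial at x = 1.

```python
from fractions import Fraction
import math

def gauss_solve(A, rhs):
    """Exact Gaussian elimination over Fractions for a small linear system."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(c, m, n):
    """[m/n] Padé coefficients (numerator a, denominator b with b[0] = 1)
    from Taylor coefficients c[0..m+n]."""
    # Denominator: sum_{j=1..n} b_j * c[k-j] = -c[k] for k = m+1 .. m+n.
    A = [[(c[m + i - j] if m + i - j >= 0 else Fraction(0))
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    b = [Fraction(1)] + gauss_solve(A, [-c[m + i] for i in range(1, n + 1)])
    # Numerator: a_k = sum_{j=0..min(k,n)} b_j * c[k-j] for k = 0 .. m.
    a = [sum(b[j] * c[k - j] for j in range(min(k, n) + 1)) for k in range(m + 1)]
    return a, b

def horner(coeffs, x):
    """Evaluate a polynomial given in ascending order of powers."""
    result = Fraction(0)
    for cf in reversed(coeffs):
        result = result * x + cf
    return result

# Taylor coefficients of exp(x): 1/k! for k = 0..4.
c = [Fraction(1, math.factorial(k)) for k in range(5)]
a, b = pade(c, 2, 2)          # a = [1, 1/2, 1/12], b = [1, -1/2, 1/12]

x = Fraction(1)
pade_val = horner(a, x) / horner(b, x)   # 19/7
taylor_val = horner(c, x)                # 65/24
```

At x = 1, the [2/2] Padé value 19/7 lies closer to e than the degree-4 Taylor sum 65/24, despite using the same five series coefficients; this resistance to the divergence of truncated series is the kind of advantage the reply attributes to Padé approximants.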
The Pharmacological Significance of Mechanical Intelligence and Artificial Stupidity
By drawing on the philosophy of Bernard Stiegler, the phenomenon of mechanical (a.k.a. artificial, digital, or electronic) intelligence is explored in terms of its real significance as an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (pharmacological analysis of poisons and remedies) by practices of care, through the outlook of what researchers equivocally describe as "artificial stupidity", which has been identified both as a new direction in the future of computer science and machine problem solving and as a new difficulty to be overcome. I weave together a web of "artificial stupidity", which denotes the mechanical (1), the human (2), or the global (3). With regard to machine intelligence, artificial stupidity refers to: 1a) weak A.I., or a rhetorical inversion designating contemporary practices of narrow, task-based procedures by algorithms in opposition to "True A.I."; 1b) the restriction or employment of constraints that weaken the effectiveness of A.I., that is, a "dumbing-down" of A.I. by programmers who intentionally introduce mistakes for safety and human-interaction purposes; 1c) the failure of machines to perform designated tasks; 1d) a lack of noetic capacity, which is a lack of moral and ethical discretion; or 1e) a lack of causal reasoning (true intelligence) as opposed to statistical associative "curve fitting". It further refers to: 2) the phenomenon of increasing human "stupidity" or drive-based behaviors, considered as the degradation of human intelligence and/or "intelligent human behavior" through technics; and finally, 3) the global phenomenon of increasing entropy due to a black-box economy of closed systems and/or industry consolidation.
Art as 'artificial stupidity'
Through treatment of selected interventions and artworks, the thesis investigates relationships between cybernetics, conceptions of intelligence and artistic practice. The works in question are primarily the artist's own, documented in the thesis and a separate portfolio. Specifically, intelligence's downside, the controversial notion of stupidity, has been reappropriated as a means of considering the way artists intervene and how art, as a system, functions.
The term "artificial stupidity" was invented in reaction to a particular construal of what Artificial Intelligence (AI) meant. The notion has been employed since, and the thesis discusses interpretations and uses of it. One meaning relates to an ability to become, or make oneself, "stupid" in order to facilitate discovery. In the conclusions, the arguments are extended to "art as a social system" (Niklas Luhmann), suggesting that it survives and reproduces through a wily kind of pretend idiocy combined with occasional acts of generosity to other systems.
The research methodology is threefold. Firstly, unapologetically playful approaches, characteristic of the artistic process, were utilised to generate ideas; thus, art becomes primary research, an equivalent to experimentation. Secondly, conventional secondary research (the study of texts) was conducted alongside artistic production. Thirdly, the works themselves are treated as raw materials to be discussed and written about as a means of developing arguments.
Work was selected on the basis of the weight it carries within the author's practice (in terms of time, effort and resources devoted) and because of its relevance to the thesis themes, i.e. contemporary and post-conceptual art, the science of feedback loops, and critiquing intelligence and AI. The second chapter divides interventions and outputs into three categories. Firstly, the short looping films termed "simupoems", which have been a consistent feature of the practice, are given attention. Then live art, in which a professional clown was often employed, is considered. Lastly, a series of interactions with the everyday technological landscape is discussed. One implication, in mapping out this trajectory, is that the clown's skills have been appropriated. "Artificial stupidity" permits parking contravention images to be mistaken for art photography, beauty to be found in courier company point-of-delivery signatures, and supermarket self-checkout machines to be used, but to buy nothing.
The nature of the writing in chapter 2 and appendix A (which was a precursor for the approach) is discursive. Works are reviewed and speculations are made about their relationship with key themes. The activities of artists such as Glenn Ligon, Sophie Calle and Samuel Beckett are drawn upon, as well as the contemporary groupings Common Culture (David Campbell and Mark Durden) and Hunt and Darton (Jenny Hunt and Holly Darton). Chapter 3 includes a more structured breakdown and taxonomy of methods. Art theories of relevance, including the ideas of Niklas Luhmann already mentioned, John Roberts, Avital Ronell, Mikhail Bakhtin, Andrew Pickering and Claire Bishop, are called upon throughout the thesis.
Interrogation of the work raises certain ethical or political questions. If there are good reasons for the unacceptability of "stupid" when applied to other human beings, might it be reasonable to be disparaging about the apparent intellectual capacities of technologies, processes and systems?
The period of PhD research provided an opportunity for the relationship between the artist's activities and the techno-industrial landscape to be articulated. The body of work and thesis constitute a contribution to knowledge on two key fronts. Firstly, the art works themselves, though precedents exist, are original and have been endorsed as such by a wider community. Secondly, the link between systems and engineering concepts and performance-oriented artistic practice is an unusual one, and, as a result, it has been possible to draw conclusions which are pertinent to technological spheres, computational capitalism and systems thinking, as well as art.
Natural intelligence or artificial stupidity?
"If liberty means anything at all, it means the right to tell people what they do not want to hear" (George Orwell). If it is true that a well-posed problem is half solved (Henri Poincaré), it is truer still that a precisely defined problem ceases to be one. Moreover, once you have eliminated the impossible, whatever remains, however improbable, is inevitably the solution (Arthur Conan Doyle). Thus, a crisis of collective hysteria, such as La Grande Panique of 2020, naturally comes to an end when the substrate on which it rests (concepts, models, theories) is exhausted. These intellectual objects are in fact nothing but beliefs, "acts of faith", which are always difficult to shed. "It is difficult to get a man to understand something, when his salary depends on his not understanding it…" (Upton Sinclair). The Reality Principle invites us to acknowledge the existence of a higher, destabilizing reality that does not conform to its idealization. Thus, the twenty-first century could mark the beginning of a new era, centred on ethical withdrawal and transcendence. Natural intelligence or artificial stupidity? "If you don't understand that, you're in the wrong business." (Joe Biden)
Artificial Stupidity: Data We Need to Make Machines Our Equals
AI must understand human limitations to provide good service and safe interactions. Standardized data on human limits would be valuable in many domains but is not available. The data science community has to work on collecting and aggregating such data in a common and widely available format, so that any AI researcher can easily look up the applicable limit measurements for their latest project.