
    Alan Turing: father of the modern computer


    The Abject in the Technological Other

    A photographic project and exegesis demonstrating artificial intelligence and artificial life as occurrences of the abject. In the photographic project '0=2', the technologies of artificial intelligence and artificial life are examined in relation to identity, via an elaboration of the psychoanalytic concept of the 'abject'. An exegesis of the creative project contains an investigation of computer technologies in regard to identity, an analysis of the basic concepts and paradigms of the sciences of artificial intelligence and artificial life, and an elaboration of the psychoanalytic concept of the abject, demonstrating A.I. and a-life as cultural instantiations of abjection. In addition, an examination of the creative work provides a further amplification of these analyses. Both the creative work and the companion exegesis contribute to cybercultural theory and arts practice, providing a psychoanalytic understanding of these scientific technologies.

    Turing on the Integration of Human and Machine Intelligence

    Philosophical discussion of Alan Turing's writings on intelligence has mostly revolved around a single point made in a paper published in the journal Mind in 1950. This is unfortunate, for Turing's reflections on machine (artificial) intelligence, human intelligence, and the relation between them were more extensive and sophisticated, and in retrospect they appear extremely well-considered and sound. Recently, IBM developed a question-answering computer (Watson) that could compete against humans on the game show Jeopardy! There are hopes it can be adapted to other contexts besides that game show, in the role of a collaborator with, rather than a competitor to, humans. A different research project is the machine learning program NELL (Never Ending Language Learning), an artificial intelligence program put into operation in 2010, which continuously 'learns' by 'reading' massive amounts of material on millions of web pages. Both of these recent endeavors in artificial intelligence rely to some extent on the integration of human guidance and feedback at various points in the machine's learning process. In this paper, I examine Turing's remarks on the development of intelligence used in various kinds of search, in light of the experience gained to date on these projects.

    The Carroll News - Vol. 76, No. 3


    You Might be a Robot

    As robots and artificial intelligence (AI) increase their influence over society, policymakers are increasingly regulating them. But to regulate these technologies, we first need to know what they are. And here we come to a problem. No one has been able to offer a decent definition of robots and AI, not even experts. What's more, technological advances make it harder and harder each day to tell people from robots and robots from dumb machines. We have already seen disastrous legal definitions written with one target in mind inadvertently affecting others. In fact, if you are reading this you are (probably) not a robot, but certain laws might already treat you as one. Definitional challenges like these aren't exclusive to robots and AI. But today, all signs indicate we are approaching an inflection point. Whether it is citywide bans of robot sex brothels or nationwide efforts to crack down on ticket scalping bots, we are witnessing an explosion of interest in regulating robots, human enhancement technologies, and all things in between. And that, in turn, means that typological quandaries once confined to philosophy seminars can no longer be dismissed as academic. Want, for example, to crack down on foreign influence campaigns by regulating social media bots? Be careful not to define 'bot' too broadly (like the California legislature recently did), or the supercomputer nestled in your pocket might just make you one. Want, instead, to promote traffic safety by regulating drivers? Be careful not to presume that only humans can drive (as our Federal Motor Vehicle Safety Standards do), or you may soon exclude the best drivers on the road. In this Article, we suggest that the problem isn't simply that we haven't hit upon the right definition. Instead, there may not be a right definition for the multifaceted, rapidly evolving technologies we call robots or AI. As we will demonstrate, even the most thoughtful of definitions risk being overbroad, underinclusive, or simply irrelevant in short order. Rather than trying in vain to find the perfect definition, we instead argue that policymakers should do as the great computer scientist Alan Turing did when confronted with the challenge of defining robots: embrace their ineffable nature. We offer several strategies to do so. First, whenever possible, laws should regulate behavior, not things (or, as we put it, regulate verbs, not nouns). Second, where we must distinguish robots from other entities, the law should apply what we call Turing's Razor, identifying robots on a case-by-case basis. Third, we offer six functional criteria for making these types of 'I know it when I see it' determinations and argue that courts are generally better positioned than legislators to apply such standards. Finally, we argue that if we must have definitions rather than apply standards, they should be as short-term and contingent as possible. That, in turn, suggests that regulators, not legislators, should play the defining role.

    The social implications of artificial intelligence.

    For 18 years, I have been publishing books and papers on the subject of the social implications of Artificial Intelligence (AI). This is an area which has been, and remains, in need of more serious academic attention than it currently receives. It will be useful to attempt a working definition of the field of AI at this stage. There is a considerable amount of disagreement as to what does and does not constitute AI, and this often has important consequences for discussions of the social implications of the field. In brief, I define AI as the study of intelligent behaviour (in humans, animals, and machines) and the attempt to find ways in which such behaviour could be engineered in any type of artefact. My position on the definition of AI as a field of activity is set out in full in various places in the works submitted for this application. Most important are Chapter 2 of Whitby, 1988b, Chapter 3 of Whitby, 1996, and Whitby, 2000. This definition is distinctive (though not unique). For the purposes of discussion of social implications, its most distinctive feature is that it does not require the imitation or replication of human intellectual attributes. Because, under this definition, AI is not limited to the study of, and attempt to build, human-like intelligence, the discussion of its social implications is rendered much broader. Also, because AI encompasses the attempt to engineer intelligent behaviour in any type of artefact, discussion of its social implications will need to consider the way in which AI technology, methods, and attitudes can permeate other areas. This will include a wide range of technologies which include an AI element and a wide range of disciplines which are influenced by AI ideas. Thus the social implications of AI become an immensely important field of study, since AI technology will steadily continue to permeate other technologies and thereby society as a whole. Many of the social implications of this technological process are non-obvious and surprising. If we are to make sensible, timely, and practical policy decisions and legislation, it is important to be as clear as possible about likely technological developments and their social implications. We may initially attempt to characterise various approaches by other authorities on the social implications of AI. These range from the wildly speculative, such as Warwick (1998) and Moravec (1988), to the mainly technical, for example Michie (1986). At the wildly speculative end of this continuum, represented by Professor Warwick, there are scare stories involving robots taking over the earth (see, for example, Warwick, 1998, pp. 21-38). At the other end of the continuum, there are writers who see AI as entirely positive, or as having no social implications at all. Most authorities will, or at least should, occupy a position somewhere between these extremes. However, in giving serious academic consideration to this area, one needs to respond to this entire range of approaches. That is to say that one must, as a minimum, both be conversant with probable technical developments and also respond carefully and critically to speculations about the nature of future society. In my research I have consistently attempted to do just this. This is obviously a cross-disciplinary exercise, and the differing methodologies of the disciplines involved present further problems in determining the best (or an approximation to the best) approach.
    For a number of reasons, which will be fully explored in this statement, my approach has concentrated (mainly, though not exclusively) on the attempt to provide guidance to those actually concerned with the technical and scientific development of AI. The published books and papers submitted as part of this application span a period of 16 years. These works form a coherent body of research around the area of the social implications of AI. This body of work both develops the theme of the need for professionalism in AI and answers the criticisms of other writers in the area, providing a full response to writers across the entire continuum described above. This is a large, coherent, important, and generally well-regarded body of work which is in every relevant sense equivalent to that required for a PhD by research.

    In Homage of Change
