1,960 research outputs found

    AI systems are dumb because AI researchers are too clever

    No full text

    SAFETY OF ARTIFICIAL SUPERINTELLIGENCE

    Get PDF
    The paper analyses an important problem of cyber security from the human safety perspective, a problem usually framed as data and/or computer safety without mentioning the human. There are numerous scientific predictions of the creation of an artificial superintelligence, which could arise in the near future. Hence the strong necessity arises to protect against such a system causing any harm. This paper reviews, in a single article, the approaches and methods already presented for solving this problem, analyses their results, and provides future research directions.

    AI Researchers, Video Games Are Your Friends!

    Full text link
    If you are an artificial intelligence researcher, you should look to video games as ideal testbeds for the work you do. If you are a video game developer, you should look to AI for the technology that makes completely new types of games possible. This chapter lays out the case for both of these propositions. It asks the question "what can video games do for AI", and discusses how general video game playing in particular is the ideal testbed for artificial general intelligence research. It then asks the question "what can AI do for video games", and lays out a vision for what video games might look like if we had significantly more advanced AI at our disposal. The chapter is based on my keynote at IJCCI 2015, and is written in an attempt to be accessible to a broad audience. Comment: in Studies in Computational Intelligence, Volume 669, 2017, Springer.

    Provably safe systems: the only path to controllable AGI

    Full text link
    We describe a path to humanity safely thriving with powerful Artificial General Intelligences (AGIs) by building them to provably satisfy human-specified requirements. We argue that this will soon be technically feasible using advanced AI for formal verification and mechanistic interpretability. We further argue that it is the only path which guarantees safe, controlled AGI. We end with a list of challenge problems whose solution would contribute to this positive outcome and invite readers to join in this work. Comment: 17 pages.

    The Cord Weekly (August, 1991)

    Get PDF

    The Nexus between Artificial Intelligence and Economics

    Get PDF
    This book is organized as follows. Section 2 introduces the notion of the Singularity, a stage in development in which technological progress and economic growth increase at a near-infinite rate. Section 3 describes what artificial intelligence is and how it has been applied. Section 4 considers artificial happiness and the likelihood that artificial intelligence might increase human happiness. Section 5 discusses some prominent related concepts and issues. Section 6 describes the use of artificial agents in economic modeling, and section 7 considers some ways in which economic analysis can offer some hints about what the advent of artificial intelligence might bring. Section 8 presents some thoughts about the current state of AI and its future prospects.

    The Black Box of AI

    Get PDF
    In this study, AI is critically analyzed through the lens of the black box metaphor, a useful methodological tool within the field of science and technology studies. 
    Beginning with an overview of the secondary literature and some philosophical issues related to AI and the black box concept, this thesis proceeds to an analysis of primary data taken from the scientific journals Nature and Scientific American. The main purpose of this work is to show how the black box of AI is constructed, to present the nuances related to it and how to unpack it, and, in general, to present the full extent to which one can use the black box metaphor in order to reflect on AI. In attempting to answer basic research questions about AI's black box, its construction through a co-production of technoscientific and social factors is revealed, along with ways to pry the black box open. Understanding the construction of the black box is the first step toward opening it. Its implications for society lead to a mandate for transparency in order to mitigate the adverse effects stemming from blackboxing procedures. Contingent on the black box of AI and the transparency mandate is trust in AI. Increasing trust in AI is a difficult but necessary task, as AI is, ultimately, a technosocial phenomenon that both shapes and is shaped by society. Finally, this thesis evaluates the relationship between the biological and the mechanical by means of the black box metaphor and AI research.

    You Might be a Robot

    Get PDF
    As robots and artificial intelligence (AI) increase their influence over society, policymakers are increasingly regulating them. But to regulate these technologies, we first need to know what they are. And here we come to a problem. No one has been able to offer a decent definition of robots and AI, not even experts. What's more, technological advances make it harder and harder each day to tell people from robots and robots from dumb machines. We have already seen disastrous legal definitions written with one target in mind inadvertently affecting others. In fact, if you are reading this you are (probably) not a robot, but certain laws might already treat you as one. Definitional challenges like these aren't exclusive to robots and AI. But today, all signs indicate we are approaching an inflection point. Whether it is citywide bans of robot sex brothels or nationwide efforts to crack down on ticket scalping bots, we are witnessing an explosion of interest in regulating robots, human enhancement technologies, and all things in between. And that, in turn, means that typological quandaries once confined to philosophy seminars can no longer be dismissed as academic. Want, for example, to crack down on foreign influence campaigns by regulating social media bots? Be careful not to define "bot" too broadly (like the California legislature recently did), or the supercomputer nestled in your pocket might just make you one. Want, instead, to promote traffic safety by regulating drivers? Be careful not to presume that only humans can drive (as our Federal Motor Vehicle Safety Standards do), or you may soon exclude the best drivers on the road. In this Article, we suggest that the problem isn't simply that we haven't hit upon the right definition. Instead, there may not be a right definition for the multifaceted, rapidly evolving technologies we call robots or AI. 
    As we will demonstrate, even the most thoughtful of definitions risk being overbroad, underinclusive, or simply irrelevant in short order. Rather than trying in vain to find the perfect definition, we instead argue that policymakers should do as the great computer scientist Alan Turing did when confronted with the challenge of defining robots: embrace their ineffable nature. We offer several strategies to do so. First, whenever possible, laws should regulate behavior, not things (or, as we put it, regulate verbs, not nouns). Second, where we must distinguish robots from other entities, the law should apply what we call Turing's Razor, identifying robots on a case-by-case basis. Third, we offer six functional criteria for making these types of "I know it when I see it" determinations and argue that courts are generally better positioned than legislators to apply such standards. Finally, we argue that if we must have definitions rather than apply standards, they should be as short-term and contingent as possible. That, in turn, suggests that regulators, not legislators, should play the defining role.