29 research outputs found

    Toward a general logicist methodology for engineering ethically correct robots

    Abstract: It is hard to deny that robots will become increasingly capable, and that humans will increasingly exploit this capability by deploying them in ethically sensitive environments; i.e., in environments (e.g., hospitals) where ethically incorrect behavior on the part of robots could have dire effects on humans. But then how will we ensure that the robots in question always behave in an ethically correct manner? How can we know ahead of time, via rationales expressed in clear English (and/or other so-called natural languages), that they will so behave? How can we know in advance that their behavior will be constrained specifically by the ethical codes selected by human overseers? In general, it seems clear that one reply worth considering, put in encapsulated form, is this one: "By insisting that our robots only perform actions that can be proved ethically permissible in a human-selected deontic logic." (A deontic logic is simply a logic that formalizes an ethical code.) This approach ought to be explored for a number of reasons. One is that ethicists themselves work by rendering ethical theories and dilemmas in declarative form, and by reasoning over this declarative information using informal and/or formal logic. Other reasons in favor of pursuing the logicist solution are presented in the paper itself. To illustrate the feasibility of our methodology, we describe it in general terms free of any commitment to particular systems, and show it solving a challenge regarding robot behavior in an intensive care unit.
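
    The encapsulated reply above implies a concrete control-flow pattern: attempt a proof that an action is ethically permissible before executing it, and refuse to act when no proof exists. The sketch below illustrates only that fail-closed gate; the names (ICU_CODE, provably_permissible, perform) are hypothetical, and the rule lookup is a toy stand-in for a real deontic-logic theorem prover, which the paper deliberately leaves unspecified.

```python
from typing import Callable

# Toy stand-in for a deontic-logic prover: an ethical code maps each action
# to a rule that, given the context, either yields a "proof" of permissibility
# (True) or fails (False). A real system would instead search for a derivation
# of Permissible(action) from the code's axioms.
EthicalCode = dict[str, Callable[[dict], bool]]

# Hypothetical ICU code, echoing the paper's intensive-care-unit challenge.
ICU_CODE: EthicalCode = {
    "sound_alarm": lambda ctx: True,  # always permissible
    "administer_medication": lambda ctx: ctx.get("physician_approved", False),
}

def provably_permissible(action: str, context: dict, code: EthicalCode) -> bool:
    """Return True only when the code yields a proof of permissibility."""
    rule = code.get(action)
    return rule is not None and rule(context)

def perform(action: str, context: dict) -> None:
    # Fail closed: no proof of permissibility, no action.
    if provably_permissible(action, context, ICU_CODE):
        print(f"executing {action}")
    else:
        print(f"refusing {action}: no proof of permissibility")

perform("sound_alarm", {})
perform("administer_medication", {"physician_approved": False})
```

    The essential property of the gate is that inaction is the default: the robot acts only on a positive proof, never on the mere absence of a prohibition.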

    Taking Turing by Surprise? Designing Digital Computers for morally-loaded contexts

    There is much to learn from what Turing hastily dismissed as Lady Lovelace's objection. Digital computers can indeed surprise us. Just like a piece of art, algorithms can be designed in such a way as to lead us to question our understanding of the world, or our place within it. Some humans do lose the capacity to be surprised in that way. It might be fear, or it might be the comfort of ideological certainties. As lazy normative animals, we do need to be able to rely on authorities to simplify our reasoning: that is ok. Yet the growing sophistication of systems designed to free us from the constraints of normative engagement may take us past a point of no-return. What if, through lack of normative exercise, our moral muscles became so atrophied as to leave us unable to question our social practices? This paper makes two distinct normative claims: 1. Decision-support systems should be designed with a view to regularly jolting us out of our moral torpor. 2. Without the depth of habit to somatically anchor model certainty, a computer's experience of something new is very different from that which in humans gives rise to non-trivial surprises. This asymmetry has key repercussions when it comes to the shape of ethical agency in artificial moral agents. The worry is not just that they would be likely to leap morally ahead of us, unencumbered by habits. The main reason to doubt that the moral trajectories of humans vs. autonomous systems might remain compatible stems from the asymmetry in the mechanisms underlying moral change. Whereas in humans surprises will continue to play an important role in waking us to the need for moral change, cognitive processes will rule when it comes to machines. This asymmetry will translate into increasingly different moral outlooks, to the point of likely unintelligibility. The latter prospect is enough to doubt the desirability of autonomous moral agents.

    Philosophy and Electronic Publishing


    The Human Brain


    Towards A Measure Of General Machine Intelligence

    To build general-purpose artificial intelligence systems that can deal with unknown variables across unknown domains, we need benchmarks that measure how well these systems perform on tasks they have never seen before. A prerequisite for this is a measure of a task's generalization difficulty, or how dissimilar it is from the system's prior knowledge and experience. If the skill of an intelligence system in a particular domain is defined as its ability to consistently generate a set of instructions (or programs) to solve tasks in that domain, current benchmarks do not quantitatively measure the efficiency of acquiring new skills, making it possible to brute-force skill acquisition by training with unlimited amounts of data and compute power. With this in mind, we first propose a common language of instruction, a programming language that allows the expression of programs in the form of directed acyclic graphs across a wide variety of real-world domains and computing platforms. Using programs generated in this language, we demonstrate a match-based method to both score performance and calculate the generalization difficulty of any given set of tasks. We use these to define a numeric benchmark called the generalization index, or the g-index, to measure and compare the skill-acquisition efficiency of any intelligence system on a set of real-world tasks. Finally, we evaluate the suitability of some well-known models as general intelligence systems by calculating their g-index scores.

    Comment: 31 pages, 15 figures, 3 tables; sample data and g-index reference code at https://github.com/mayahq/g-index-benchmark; g-index toy environment at https://github.com/mayahq/flatland; version 2 added a section about the toy environment; version 3 compressed images to reduce file size; version 4 updated the description of the flatland toy environment.
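
    The abstract names the ingredients (per-task scores, a generalization-difficulty measure, and a penalty for unbounded training data) without giving the formula, which lives in the reference code linked above. Purely as an illustrative toy, and not the paper's actual g-index, the sketch below weights each task's score by a crude novelty proxy (Jaccard dissimilarity from the training features) and discounts by the amount of training data consumed; all names here are invented for the example.

```python
# Illustrative toy only; the real g-index is defined in the paper and its
# reference code (https://github.com/mayahq/g-index-benchmark).

def novelty(task_features: set, train_features: set) -> float:
    """Crude proxy for generalization difficulty: Jaccard dissimilarity
    between a task's features and everything seen in training
    (1.0 = completely novel, 0.0 = already seen)."""
    union = task_features | train_features
    return (1.0 - len(task_features & train_features) / len(union)) if union else 0.0

def toy_g_index(tasks: list[tuple[set, float]],
                train_features: set,
                train_examples: int) -> float:
    """Average per-task score weighted by novelty, then discounted by the
    amount of training data used: more data for the same scores means
    lower skill-acquisition efficiency."""
    if not tasks:
        return 0.0
    weighted = sum(score * novelty(features, train_features)
                   for features, score in tasks)
    return weighted / (len(tasks) * (1 + train_examples) ** 0.5)

train = {"sort", "filter", "http"}
tasks = [({"sort", "regex"}, 0.9),        # (task features, score in [0, 1])
         ({"vision", "planning"}, 0.4)]
print(toy_g_index(tasks, train, train_examples=100))
```

    The discount term is the load-bearing design choice: without it, a system trained on unlimited data could match any score, which is exactly the brute-forcing of skill acquisition the benchmark is meant to rule out.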

    Turingův test: filozofické aspekty umělé inteligence (The Turing Test: Philosophical Aspects of Artificial Intelligence)

    This dissertation addresses the problem of attributing thought to other entities by means of the imitation game proposed in 1950 by the British philosopher Alan Turing. His criterion, known in the history of philosophy as the Turing test, is subjected to detailed analysis. The thesis describes not only Turing's own original objections but, above all, the later discussions of the second half of the 20th century. The greatest attention is devoted to the following critiques: Lucas's mathematical objection based on Gödel's incompleteness theorem, Searle's Chinese room argument asserting that syntax is insufficient for semantics, Block's proposal to solve the imitation game by brute force, French's theory of subcognitive information, and Michie's skepticism about the possibility of artificial consciousness. The conclusion surveys the current state of the Turing test's reception and presents attempts at its practical realization, for example in the annual Loebner Prize competition. The author holds that, even more than sixty years after the publication of Turing's paradigmatic essay, there are still no serious grounds for rejecting his claims. Traditional computational functionalism may not be the ideal theory of how minds work, and developments in the neural sciences may appear more promising, but the Turing test nevertheless remains a useful, and perhaps the only, tool for detecting intelligence in human-made machines.

    Foundations of Trusted Autonomy

    Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies