
    Napad na Pascala

    Somewhere in a dark alley... Mugger: Hey you, hand over your wallet! Pascal: And why exactly should I do that? Mugger: Because otherwise I'll shoot you. Pascal: But you don't have a gun. Mugger: Darn! I knew I'd forgotten something. Pascal: Then forget about my wallet too. Have a nice evening. Mugger: Stop! Pascal: What now? Mugger: There's a deal to be made... How about you hand over the wallet after all? In return, I promise to come back to you tomorrow and give you twice the amount you have in it. Not bad, eh? A 200 percent return on investment in 24 hours.
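
    The excerpt breaks off before the decision-theoretic punchline, but the mugger's pitch is already a gamble that can be written out as expected values. Below is a toy sketch of that calculation; all probabilities and payoffs are illustrative assumptions, not figures from Bostrom's text.

    # Toy expected-value comparison behind Pascal's mugging.
    # All probabilities and payoffs here are illustrative assumptions.

    WALLET = 100.0  # amount Pascal is carrying

    def ev_hand_over(p_honest: float, promised_payoff: float) -> float:
        """Expected value of handing over the wallet: the mugger pays
        the promised amount with probability p_honest, else nothing."""
        return p_honest * promised_payoff

    # The opening offer: double the money tomorrow. Handing the wallet
    # over beats keeping it only if Pascal thinks the mugger is at least
    # as likely as not to pay up.
    print(ev_hand_over(0.5, 2 * WALLET))   # 100.0 -> break-even with keeping it
    print(ev_hand_over(0.01, 2 * WALLET))  # 2.0   -> far worse, so Pascal refuses

    # The mugging proper: escalate the promise until even a minuscule
    # credence makes the naive expected value dominate keeping the wallet.
    print(ev_hand_over(1e-9, 1e15))        # 1,000,000.0 >> 100.0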

    Pascal’s mugging

    Nick Bostrom (University of Oxford, UK), Napad na Pascala, translated by Tomasz Żuradzki.

    Future progress in artificial intelligence: A poll among experts

    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
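
    The headline figures are aggregates over individual questionnaire answers. As a rough illustration of that kind of aggregation, here is a sketch with fabricated responses standing in for the survey data; the real questionnaire and dataset are described in the full paper.

    # Aggregating hypothetical expert answers in the style of the
    # survey's headline numbers. The responses below are fabricated
    # placeholders, not the study's data.
    from statistics import median

    # Each tuple: (year by which the expert assigns a 50% chance to
    # high-level machine intelligence, P(outcome is bad or extremely bad))
    responses = [
        (2040, 0.30), (2045, 0.40), (2050, 0.25),
        (2038, 0.35), (2060, 0.20), (2048, 0.45),
    ]

    median_year = median(year for year, _ in responses)
    median_p_bad = median(p for _, p in responses)

    print(f"Median 50%-confidence year: {median_year}")   # ~2040-2050 in the study
    print(f"Median P(bad outcome):      {median_p_bad}")  # ~one in three in the study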

    Observational selection effects and probability.

    This thesis develops a theory of how to reason when our evidence has been subjected to observational selection effects. It has applications in cosmology, evolutionary biology, thermodynamics and the problem of time's arrow, game-theoretic problems with imperfect recall, the philosophical evaluation of the many-worlds and many-minds interpretations of quantum mechanics and David Lewis's modal realism, and even traffic planning. After refuting several popular doctrines about the implications of cosmological fine-tuning, we present an informal model of the observational selection effects involved. Next, we evaluate attempts that have been made to codify the correct way of reasoning about such effects - in the form of so-called "anthropic principles" - and find them wanting. A new principle is proposed to replace them: the Self-Sampling Assumption (SSA). A series of thought experiments is presented showing that SSA should be used in a wide range of contexts. We also show that SSA gives better methodological guidance than rival principles in a number of scientific fields. We then explain how SSA can lead to the infamous Doomsday argument. Identifying the additional assumptions required to derive this consequence, we suggest alternative conclusions. We refute several objections against the Doomsday argument and show that SSA does not give rise to paradoxical "observer-relative chances", as has been alleged. However, we discover new consequences of SSA that are more counterintuitive than the Doomsday argument. Using these results, we construct a version of SSA that avoids the paradoxes and does not lead to the Doomsday argument but caters to legitimate methodological needs. This modified principle is used as the basis for the first mathematically explicit theory of reasoning under observational selection effects. This observation theory resolves the range of conundrums associated with anthropic reasoning and provides a general framework for evaluating theories about the large-scale structure of the world and the distribution of observers within it.
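
    The Doomsday argument mentioned in the abstract follows from SSA by one short Bayesian step: if you should reason as if you were a random sample from all humans who will ever exist, then a low birth rank is likelier under a small total population than under a vast one. Below is a minimal sketch of that update; the hypotheses, prior, and birth rank are illustrative assumptions for the example, not figures from the thesis.

    # Minimal Bayesian sketch of the Doomsday argument under SSA.
    # Hypotheses, prior, and rank are illustrative assumptions.

    # Under SSA, given N humans ever, your birth rank r is uniform on
    # 1..N, so P(r | N) = 1/N for r <= N (and 0 otherwise).
    def likelihood(rank: int, total: int) -> float:
        return 1.0 / total if rank <= total else 0.0

    rank = 100_000_000_000  # stand-in for being roughly the 100 billionth human

    hypotheses = {            # total humans ever, with a flat 50/50 prior
        "doom_soon": 200_000_000_000,      # 200 billion
        "doom_late": 200_000_000_000_000,  # 200 trillion
    }
    prior = {h: 0.5 for h in hypotheses}

    unnormalized = {h: prior[h] * likelihood(rank, n) for h, n in hypotheses.items()}
    z = sum(unnormalized.values())
    posterior = {h: u / z for h, u in unnormalized.items()}

    # The 1/N likelihood shifts almost all probability to the smaller
    # total: posterior odds are N_late / N_soon = 1000 : 1 for doom_soon.
    print(posterior)  # {'doom_soon': ~0.999, 'doom_late': ~0.001}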

    Ethical issues in advanced artificial intelligence
