Death and Suicide in Universal Artificial Intelligence
Reinforcement learning (RL) is a general paradigm for studying intelligent
behaviour, with applications ranging from artificial intelligence to psychology
and economics. AIXI is a universal solution to the RL problem; it can learn any
computable environment. A technical subtlety of AIXI is that it is defined
using a mixture over semimeasures that need not sum to 1, rather than over
proper probability measures. In this work we argue that the shortfall of a
semimeasure can naturally be interpreted as the agent's estimate of the
probability of its death. We formally define death for generally intelligent
agents like AIXI, and prove a number of related theorems about their behaviour.
Notable discoveries include that agent behaviour can change radically under
positive linear transformations of the reward signal (from suicidal to
dogmatically self-preserving), and that the agent's posterior belief that it
will survive increases over time.
Comment: Conference: Artificial General Intelligence (AGI) 2016. 13 pages, 2 figures.
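The semimeasure interpretation above can be illustrated with a toy sketch (the function name and the numeric weights are hypothetical, for illustration only, not taken from the paper): a conditional semimeasure assigns weights to possible next observations that may sum to less than 1, and the shortfall from 1 is read as the agent's estimated probability of death.

```python
def death_probability(semimeasure_next):
    """Given a dict mapping each possible next observation to its
    conditional semimeasure weight, read the shortfall from 1 as
    the agent's estimated probability of death."""
    total = sum(semimeasure_next.values())
    if not 0.0 <= total <= 1.0:
        raise ValueError("semimeasure weights must sum to at most 1")
    return 1.0 - total

# Toy conditional semimeasure over two observations (hypothetical numbers):
nu = {"obs_0": 0.55, "obs_1": 0.35}  # weights sum to 0.9
p_death = death_probability(nu)      # shortfall of 0.1 from a full measure
```

A proper probability measure would make the weights sum to exactly 1, giving a death probability of 0; only a strict semimeasure leaves a nonzero shortfall.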
Triple Helix, Fall 2018
Table of Contents: Science Agenda: The Politics of Grant Writing / by Kavya Rajesh (p. 4) -- From the Experts / by Katherine Bruner (p. 5) -- 3D Printed Drugs: The Future of Pharmaceuticals / by Ethan Wang (p. 6) -- Computerized Markets: Wall Street Takeover / by James Kiraly (p. 10) -- The Evolution of Fear / by Alisha Ahmed (p. 14) -- ADDing Up / by Victor Liaw (p. 18) -- The Clone Wars / by Jina Zhou (p. 22) -- Physician-Assisted Suicide: Drawing the Line / by Haley Wolf (p. 26) -- Supervised Injection Sites / by Alex Gajewski (p. 30) -- On Emerging Medicalization and Health Care / by Patrick Lee (p. 33) -- The Future of Human Gene Modifications / by Elizabeth Robinson (p. 36). College of Natural Sciences, UT Libraries, Liberal Arts
Can intelligence explode?
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS, 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to augment Chalmers' and to discuss some issues not addressed by him, in particular what it could mean for intelligence to explode. In the course of this, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.
Suicide Screening in Primary Care: Use of an Electronic Screener to Assess Suicidality and Improve Provider Follow-Up for Adolescents
Purpose
The purpose of this study was to assess the feasibility of using an existing computer decision support system (CDSS) to screen adolescent patients for suicidality and provide follow-up guidance to clinicians in a primary care setting. Predictors of patient endorsement of suicidality and provider documentation of follow-up were examined.
Methods
A prospective cohort study was conducted to examine the implementation of a CDSS that screened adolescent patients for suicidality and provided follow-up recommendations to providers. The intervention was implemented for patients aged 12–20 years in two primary care clinics in Indianapolis, Indiana.
Results
The sample included 2,134 adolescent patients (51% female; 60% black; mean age = 14.6 years [standard deviation = 2.1]). Just over 6% of patients screened positive for suicidality. A positive endorsement of suicidality was more common among patients who were female, depressed, and seen by an adolescent-medicine board-certified provider as opposed to a general pediatric provider. Providers documented follow-up action for 83% of patients who screened positive for suicidality. Documentation of follow-up action was associated with clinic site and Hispanic ethnicity. The majority of patients who endorsed suicidality (71%) were deemed not actively suicidal after assessment by their provider.
Conclusions
Incorporating adolescent suicide screening and provider follow-up guidance into an existing computer decision support system in primary care is feasible and well utilized by providers. Female gender and depressive symptoms are consistently associated with suicidality among adolescents, although not all suicidal adolescents are depressed. Universal use of a multi-item suicide screener that assesses recency might more effectively identify suicidal adolescents.
Restricted Complexity, General Complexity
Why has the problematic of complexity appeared so late? And why would it be justified?
Double elevation: Autonomous weapons and the search for an irreducible law of war
What should be the role of law in response to the spread of artificial intelligence in war? Fuelled by both public and private investment, military technology is accelerating towards increasingly autonomous weapons, as well as the merging of humans and machines. Contrary to much of the contemporary debate, this is not a paradigm change; it is the intensification of a central feature in the relationship between technology and war: double elevation, above one's enemy and above oneself. Elevation above one's enemy aspires to spatial, moral, and civilizational distance. Elevation above oneself reflects a belief in rational improvement that sees humanity as the cause of inhumanity and de-humanization as our best chance for humanization. The distance of double elevation is served by the mechanization of judgement. To the extent that judgement is seen as reducible to algorithm, law becomes the handmaiden of mechanization. In response, neither a focus on questions of compatibility nor a call for a 'ban on killer robots' helps in articulating a meaningful role for law. Instead, I argue that we should turn to a long-standing philosophical critique of artificial intelligence, which highlights not the threat of omniscience, but that of impoverished intelligence. Therefore, if there is to be a meaningful role for law in resisting double elevation, it should be law encompassing subjectivity, emotion and imagination, law irreducible to algorithm, a law of war that appreciates situated judgement in the wielding of violence for the collective.