
    Robust Computer Algebra, Theorem Proving, and Oracle AI

    In the context of superintelligent AI systems, the term "oracle" has two meanings. One refers to modular systems queried for domain-specific tasks. The other, referring to a class of systems which may be useful for addressing the value alignment and AI control problems, is a superintelligent AI system that only answers questions. The aim of this manuscript is to survey contemporary research problems related to oracles which align with long-term research goals of AI safety. We examine existing question answering systems and argue that their high degree of architectural heterogeneity makes them poor candidates for rigorous analysis as oracles. On the other hand, we identify computer algebra systems (CASs) as primitive examples of domain-specific oracles for mathematics and argue that efforts to integrate CASs with theorem provers, systems which have largely been developed independently of one another, provide a concrete set of problems related to the notion of provable safety that has emerged in the AI safety community. We review approaches to interfacing CASs with theorem provers, describe well-defined architectural deficiencies that have been identified in CASs, and suggest possible lines of research and practical software projects for scientists interested in AI safety. Comment: 15 pages, 3 figures
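    The architectural deficiencies mentioned in the abstract stem from CASs applying generic rewrite rules without tracking side conditions. A minimal illustration of this pattern, using SymPy (chosen here only as a readily available CAS, not a system named in the abstract):

    ```python
    from sympy import symbols, simplify

    x = symbols('x')

    # A CAS cancels x/x to 1, silently assuming x != 0;
    # an interfaced theorem prover would demand a proof of
    # that side condition before accepting the rewrite.
    cancelled = simplify(x / x)
    print(cancelled)  # 1, with no record of the x != 0 assumption
    ```

    This gap between heuristic simplification and proof obligations is precisely what makes CAS/theorem-prover integration a concrete testbed for provable safety.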

    Long-term health outcomes after exposure to repeated concussion in elite level rugby union players

    Background: There is continuing concern about the effects of concussion in athletes, including the risk of the neurodegenerative disease chronic traumatic encephalopathy. However, information on long-term health and wellbeing in former athletes is limited. Method: Outcome after exposure to repeated brain injury was investigated in 52 retired male Scottish international rugby players (RIRP) and 29 male controls who were similar in age and social deprivation. Assessment included history of playing rugby and traumatic brain injury, general and mental health, life stress, concussion symptoms, cognitive function, disability and markers of chronic stress (allostatic load). Results: The estimated number of concussions in RIRP averaged 14 (median=7; IQR 5-40). Performance was poorer in RIRP than controls on a test of verbal learning (p=0.022) and on a test of fine co-ordination of the dominant hand (p=0.038), and not significantly different on other cognitive tests (p>0.05). There were no significant associations between number of concussions and performance on cognitive tests. Other than a higher incidence of cardiovascular disease in controls, no group differences were detected in general or mental health or estimates of allostatic load. In RIRP, persisting symptoms attributed to concussion were more common in those reporting more than nine concussions (p=0.028), although these symptoms were not perceived to affect social or work functioning. Conclusions: Despite a high number of concussions in RIRP, no differences in mental health or in social or work functioning were found late after injury. Subtle group differences were detected on two cognitive tests, the cause of which is uncertain. Prospective group comparison studies on representative cohorts are required.

    Anti-Trust and Economic Theory: Some Observations from the US Experience

    Recent developments in US anti-trust can be characterised as reflecting the uneasy interaction of two quite separate phenomena: first, the increased emphasis on economic analysis as the overriding organising principle of anti-trust policy and on economic efficiency as the primary (perhaps only) relevant goal of anti-trust; second, the long-standing reluctance of the federal judiciary to involve itself in any substantive economic analysis, and its preference, instead, for simple rules of thumb or ‘pigeon holes’ to sort lawful from unlawful conduct. The result has been that while economics has played a major role, it has not influenced American anti-trust as thoroughly or as uniformly as might have been imagined; rather, the extent and nature of its influence have depended on the degree to which the relevant economics could be reduced to the kind of simple rules or pigeon holes that the judiciary favours. The present paper illustrates that theme, first by reporting on the two developments separately and then by illustrating their joint influence with reference to two important areas of American anti-trust: predatory conduct and so-called vertical restraints. Finally, a contrast is drawn between judicial development in those two areas and recent American merger policy which, it is argued, is carried out largely independently of the judiciary, so that the opportunities for economics to influence the process are less inhibited by the judicial reluctance to undertake extensive economic analysis.

    Mammalian Value Systems

    Characterizing human values is a topic deeply interwoven with the sciences, humanities, political philosophy, art, and many other human endeavors. In recent years, a number of thinkers have argued that accelerating trends in computer science, cognitive science, and related disciplines foreshadow the creation of intelligent machines which meet and ultimately surpass the cognitive abilities of human beings, thereby entangling an understanding of human values with future technological development. Contemporary research accomplishments suggest that increasingly sophisticated AI systems will become widespread and responsible for managing many aspects of the modern world, from preemptively planning users’ travel schedules and logistics, to fully autonomous vehicles, to domestic robots assisting in daily living. The extrapolation of these trends has been most forcefully described in the context of a hypothetical “intelligence explosion,” in which the capabilities of an intelligent software agent would rapidly increase due to the presence of feedback loops unavailable to biological organisms. The possibility of superintelligent agents, or simply the widespread deployment of sophisticated, autonomous AI systems, highlights an important theoretical problem: the need to separate the cognitive and rational capacities of an agent from the fundamental goal structure, or value system, which constrains and guides the agent’s actions. The “value alignment problem” is to specify a goal structure for autonomous agents compatible with human values. In this brief article, we suggest that recent ideas from affective neuroscience and related disciplines aimed at characterizing neurological and behavioral universals in the mammalian kingdom provide important conceptual foundations relevant to describing human values. We argue that the notion of “mammalian value systems” points to a potential avenue for fundamental research in AI safety and AI ethics.
