
    Global Solutions vs. Local Solutions for the AI Safety Problem

    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI elsewhere. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created between many AIs, so they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or are part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of “AI Nanny” (a non-self-improving global control AI system able to prevent creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales up to a global solution, or does so ethically and safely. The choice of the best local solution should include an understanding of the ways in which it will be scaled up. Human-AI teams or a superintelligent AI Service as suggested by Drexler may be examples of such ethically scalable local solutions, but the final choice depends on unknown variables such as the speed of AI progress.

    Insurability Challenges Under Uncertainty: An Attempt to Use the Artificial Neural Network for the Prediction of Losses from Natural Disasters

    The main difficulty for natural disaster insurance derives from the uncertainty of an event’s damages. Insurers cannot precisely appreciate the weight of natural hazards because of risk dependences. Insurability under uncertainty first requires an accurate assessment of total damages. Insured and insurers both win when premiums price risk properly: in such cases, coverage will be available and affordable. Using the artificial neural network, a technique rooted in artificial intelligence, insurers can predict annual natural disaster losses. There are many types of artificial neural network models. In this paper we use the multilayer perceptron neural network, the one best suited to the prediction task. In fact, if we provide the natural disaster explanatory variables to the developed neural network, it can accurately estimate the potential annual losses for the studied country.
    Natural disaster losses, Insurability, Uncertainty, Multilayer perceptron neural network, Prediction.
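    The regression setup the abstract describes can be sketched as a small multilayer perceptron trained on explanatory variables to predict annual losses. The sketch below is illustrative only: the feature names, synthetic data, and network size are assumptions, not the paper's dataset or architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical explanatory variables (e.g. exposure, hazard index,
    # insured value) and synthetic annual losses -- illustrative data only.
    X = rng.normal(size=(200, 3))
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=200)

    # One-hidden-layer MLP: 3 inputs -> 8 tanh units -> 1 output.
    W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

    def forward(X):
        h = np.tanh(X @ W1 + b1)
        return h, (h @ W2 + b2).ravel()

    lr = 0.05
    for step in range(1000):
        h, pred = forward(X)
        err = pred - y                        # gradient of 0.5*MSE w.r.t. pred
        gW2 = h.T @ err[:, None] / len(X)
        gb2 = err.mean(keepdims=True)
        dh = err[:, None] @ W2.T * (1 - h**2)  # backprop through tanh
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    _, pred = forward(X)
    mse = float(np.mean((pred - y) ** 2))
    ```

    After training, the mean squared error on the synthetic data falls well below the variance of the target, which is the basic property the paper relies on when feeding real explanatory variables into the fitted perceptron.
    
    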

    Catastrophic Risk from Rapid Developments in Artificial Intelligence: what is yet to be addressed and how might New Zealand policymakers respond?

    This article describes important possible scenarios in which rapid advances in artificial intelligence (AI) pose multiple risks, including to democracy and for inter-state conflict. In parallel with other countries, New Zealand needs policies to monitor, anticipate and mitigate global catastrophic and existential risks from advanced new technologies. A dedicated policy capacity could translate emerging research and policy options into the New Zealand context. It could also identify how New Zealand could best contribute to global solutions. It is desirable that the potential benefits of AI are realised, while the risks are also mitigated to the greatest extent possible.

    Continual Local Training for Better Initialization of Federated Models

    Federated learning (FL) refers to the learning paradigm that trains machine learning models directly in decentralized systems consisting of smart edge devices without transmitting the raw data, which avoids heavy communication costs and privacy concerns. Given the typical heterogeneous data distributions in such situations, the popular FL algorithm \emph{Federated Averaging} (FedAvg) suffers from weight divergence and thus cannot achieve competitive performance for the global model (denoted as the \emph{initial performance} in FL) compared to centralized methods. In this paper, we propose the local continual training strategy to address this problem. Importance weights are evaluated on a small proxy dataset on the central server and then used to constrain the local training. With this additional term, we alleviate the weight divergence and continually integrate the knowledge on different local clients into the global model, which ensures a better generalization ability. Experiments on various FL settings demonstrate that our method significantly improves the initial performance of federated models with little extra communication cost.
    Comment: This paper has been accepted to the 2020 IEEE International Conference on Image Processing (ICIP 2020).
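    The strategy the abstract describes can be sketched in a few lines: the server estimates per-weight importance on a small proxy dataset, and each client adds an importance-weighted pull toward the global model during local training before FedAvg aggregation. This is a minimal sketch on synthetic linear tasks; the data, hyperparameters, and the diagonal (EWC-style) importance estimate are assumptions standing in for the paper's actual method and models.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    dim = 5

    # Heterogeneous clients: each holds data from a shifted linear task,
    # a stand-in for non-IID data distributions (illustrative setup).
    def make_client(shift):
        X = rng.normal(size=(100, dim))
        w = np.ones(dim) + shift
        return X, X @ w + 0.1 * rng.normal(size=100)

    clients = [make_client(s) for s in (-0.5, 0.0, 0.5)]
    proxy_X, proxy_y = make_client(0.0)      # small proxy set on the server

    def grad(w, X, y):
        return X.T @ (X @ w - y) / len(X)    # squared-error gradient

    w_global = np.zeros(dim)
    lam, lr = 1.0, 0.05
    for rnd in range(20):
        # Server: diagonal importance of each weight on the proxy data
        # (mean squared per-example gradient, an EWC-style approximation).
        g = proxy_X * (proxy_X @ w_global - proxy_y)[:, None]
        omega = np.mean(g ** 2, axis=0)
        updates = []
        for X, y in clients:
            w = w_global.copy()
            for _ in range(10):              # local steps
                # Local gradient plus importance-weighted pull toward the
                # global weights, constraining weight divergence.
                w -= lr * (grad(w, X, y) + lam * omega * (w - w_global))
            updates.append(w)
        w_global = np.mean(updates, axis=0)  # FedAvg aggregation
    ```

    The extra `lam * omega * (w - w_global)` term is the "additional term" of the abstract: clients with heterogeneous data are discouraged from moving important weights far from the global model, so averaging the local updates degrades the global model less.
    
    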

    Global Risks 2015, 10th Edition.

    The 2015 edition of the Global Risks report completes a decade of highlighting the most significant long-term risks worldwide, drawing on the perspectives of experts and global decision-makers. Over that time, analysis has moved from risk identification to thinking through risk interconnections and the potentially cascading effects that result. Taking this effort one step further, this year's report underscores potential causes as well as solutions to global risks. Not only do we set out a view on 28 global risks in the report's traditional categories (economic, environmental, societal, geopolitical and technological) but we also consider the drivers of those risks in the form of 13 trends. In addition, we have selected initiatives for addressing significant challenges, which we hope will inspire collaboration among business, government and civil society communities.

    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security. The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.