
    Robust Computer Algebra, Theorem Proving, and Oracle AI

    In the context of superintelligent AI systems, the term "oracle" has two meanings. One refers to modular systems queried for domain-specific tasks. Another usage, referring to a class of systems which may be useful for addressing the value alignment and AI control problems, is a superintelligent AI system that only answers questions. The aim of this manuscript is to survey contemporary research problems related to oracles which align with long-term research goals of AI safety. We examine existing question answering systems and argue that their high degree of architectural heterogeneity makes them poor candidates for rigorous analysis as oracles. On the other hand, we identify computer algebra systems (CASs) as being primitive examples of domain-specific oracles for mathematics and argue that efforts to integrate computer algebra systems with theorem provers, systems which have largely been developed independently of one another, provide a concrete set of problems related to the notion of provable safety that has emerged in the AI safety community. We review approaches to interfacing CASs with theorem provers, describe well-defined architectural deficiencies that have been identified with CASs, and suggest possible lines of research and practical software projects for scientists interested in AI safety. Comment: 15 pages, 3 figures
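The abstract's framing of a CAS as a domain-specific oracle can be illustrated with a minimal sketch: query the CAS for an answer that is hard to produce, then check it by an independent, easier computation. This uses SymPy purely as an example CAS; the verification-by-differentiation pattern is an assumption for illustration, not the paper's own method.

```python
# A CAS treated as a domain-specific mathematical "oracle": we ask it for an
# antiderivative (hard to produce), then independently check the answer by
# differentiating it (easy to verify). SymPy stands in for an arbitrary CAS.
import sympy as sp

x = sp.symbols('x')
query = sp.sin(x) * sp.exp(x)

answer = sp.integrate(query, x)                   # query the oracle
residual = sp.simplify(sp.diff(answer, x) - query)  # independent check

assert residual == 0  # the oracle's answer passes verification
```

The asymmetry exploited here (checking is cheaper than producing) is exactly what makes interfacing a CAS with a theorem prover attractive: the prover need only certify the returned answer, not reproduce the CAS's search.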

    Safety Analysis Methods for Complex Systems in Aviation

    Each new concept of operation and equipment generation in aviation becomes more automated, integrated and interconnected. In the case of Unmanned Aircraft Systems (UAS), this evolution allows a drastic decrease in aircraft weight and operational cost, but these benefits are also realized in highly automated manned aircraft and ground Air Traffic Control (ATC) systems. The downside of these advances is overwhelmingly more complex software and hardware, making it harder to identify potential failure paths. Although there are mandatory certification processes based on broadly accepted standards, such as ARP4754 and its family, ESARR 4 and others, these standards do not allow proof or disproof of safety of disruptive technology changes, such as GBAS Precision Approaches, Autonomous UAS, aircraft self-separation and others. To support the introduction of such concepts, it is necessary to develop solid knowledge of the foundations of safety in complex systems and use this knowledge to elaborate sound demonstrations of either safety or unsafety of new system designs. Such demonstrations at early design stages will help reduce both the cost of developing new technology and the risk of such technology causing accidents when in use. This paper presents some safety analysis methods which are not in the industry standards but which we identify as having benefits for analyzing safety of advanced technological concepts in aviation.

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    Bi-Directional Safety Analysis for Product-Line, Multi-Agent Systems

    Safety-critical systems composed of highly similar, semi-autonomous agents are being developed in several application domains. An example of such multi-agent systems is a fleet, or “constellation”, of satellites. In constellations of satellites, each satellite is commonly treated as a distinct autonomous agent that must cooperate to achieve higher-level constellation goals. In previous work, we have shown that modeling a constellation of satellites or spacecraft as a product line of agents (where the agents have many shared commonalities and a few key differences) enables reuse of software analysis and design assets. We have also previously developed efficient safety analysis techniques for product lines. We now propose the use of Bi-Directional Safety Analysis (BDSA) to aid in system certification. We extend BDSA to product lines of multi-agent systems and show how the analysis artifacts thus produced contribute to the software’s safety case for certification purposes. The product-line approach lets us reuse portions of the safety analysis for multiple agents, significantly reducing the burden of certification. We motivate and illustrate this work through a specific application, a product-line, multi-agent satellite constellation.

    The future of Cybersecurity in Italy: Strategic focus area

    This volume has been created as a continuation of the previous one, with the aim of outlining a set of focus areas and actions that the Italian national research community considers essential. The book touches many aspects of cyber security, ranging from the definition of the infrastructure and controls needed to organize cyberdefence to the actions and technologies to be developed to be better protected, from the identification of the main technologies to be defended to the proposal of a set of horizontal actions for training, awareness raising, and risk management.

    NASA space station automation: AI-based technology review

    Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Characterisation framework of key policy, regulatory and governance dynamics and impacts upon European food value chains: Fairer trading practices, food integrity, and sustainability collaborations. VALUMICS project “Understanding Food Value Chains and Network Dynamics” funded by EU Horizon 2020 G.A. No 727243. Deliverable D3.3

    The report provides a framework that categorises the different European Union (EU) policies, laws and governance actions identified as impacting upon food value chains in the defined areas of: fairer trading practices, food integrity (food safety and authenticity), and sustainability collaborations along food value chains. A four-stage framework is presented and illustrated with examples. The evidence shows that European Union policy activity impacting upon food value chain dynamics is increasing, both in terms of the impacts of policies upon the chains and in terms of addressing some of the more contentious outcomes of these dynamics. A number of policy priorities are at play in addressing the outcomes of food value chain dynamics, notably the uneven distribution of profit within food value chains, particularly to farmers. Regulation of food safety and aspects of authenticity has been a key focus for two decades to ensure a functioning single market while ensuring consumer health and wellbeing. A food chain length perspective has been attempted, notably through regulations such as the General Food Law, and the rationalisation of the Official Controls on food and feed safety. However, there are still gaps in the effective monitoring and transparency of food safety and of food integrity along value chains, as exemplified by misleading claims and criminal fraud. This has led to renewed policy actions over food fraud, in particular. EU regulations, policies and related governance initiatives provide an important framework for national-level actions for EU member states and for EEA members. The more tightly EU-regulated areas, such as food safety, see fewer additional initiatives, but where there is a more general strategic policy and governance push, such as food waste reduction or food fraud, there is greater independent state-level activity.
Likewise, there is much more variation in the application of both national and European (Competition) law to govern unfair trading practices impacting upon food value chains. This report presents the findings of a survey of members of the VALUMICS stakeholder platform, policy-facing food value chain stakeholders across selected European countries, including both EU and EEA member states. The survey was conducted to check the significance of the main policies identified in the mapping exercise at EU and national levels, and to incorporate the views of stakeholders in the research. The responses suggest the policy concerns identified in EU and national-level research resonate with food value chain stakeholders in participating nations. The report concludes by exploring in more detail how the themes of fairness and of transparency are being handled in the policy activities presented. Highlighted are the ways that both fairness and transparency can be extended within the existing frameworks of EU policy activity. The findings in this report provide an important context for further and detailed research analysis of the workings and dynamics of European food value chains under the VALUMICS project.

    Enhancing Exploration and Safety in Deep Reinforcement Learning

    A Deep Reinforcement Learning (DRL) agent tries to learn a policy maximizing a long-term objective by trial and error in large state spaces. However, this learning paradigm requires a non-trivial amount of interactions in the environment to achieve good performance. Moreover, critical applications, such as robotics, typically involve safety criteria to consider while designing novel DRL solutions. Hence, devising safe learning approaches with efficient exploration is crucial to avoid getting stuck in local optima, failing to learn properly, or causing damage to the surrounding environment. This thesis focuses on developing Deep Reinforcement Learning algorithms to foster efficient exploration and safer behaviors in simulation and real domains of interest, ranging from robotics to multi-agent systems. To this end, we rely both on standard benchmarks, such as SafetyGym, and robotic tasks widely adopted in the literature (e.g., manipulation, navigation). This variety of problems is crucial to assess the statistical significance of our empirical studies and the generalization skills of our approaches. We initially benchmark the sample efficiency versus performance trade-off between value-based and policy-gradient algorithms. This part highlights the benefits of using non-standard simulation environments (i.e., Unity), which also facilitates the development of further optimization for DRL. We also discuss the limitations of standard evaluation metrics (e.g., return) in characterizing the actual behaviors of a policy, proposing the use of Formal Verification (FV) as a practical methodology to evaluate behaviors over desired specifications. The second part introduces Evolutionary Algorithms (EAs) as a gradient-free complementary optimization strategy. In detail, we combine population-based and gradient-based DRL to diversify exploration and improve performance in both single- and multi-agent applications.
For the latter, we discuss how prior Multi-Agent (Deep) Reinforcement Learning (MARL) approaches hinder exploration, proposing an architecture that favors cooperation without affecting exploration.
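The safe-exploration idea the abstract describes can be sketched with a "shield" that vetoes unsafe actions before the agent executes them. Everything here is an illustrative assumption (a one-dimensional corridor, the `is_safe` predicate, epsilon-greedy action selection), not the thesis's actual algorithms.

```python
# Minimal sketch of shielded exploration: a safety filter restricts the
# agent's action set to constraint-satisfying actions, so even random
# exploratory moves never leave the safe region.
import random

GRID = 5  # positions 0..4 form the safe region of a 1-D corridor

def is_safe(pos, action):
    """Safety predicate: the next position must stay inside the corridor."""
    return 0 <= pos + action < GRID

def shielded_step(pos, q_values, eps=0.1):
    actions = [-1, +1]
    safe = [a for a in actions if is_safe(pos, a)]   # shield: filter first
    if random.random() < eps:
        a = random.choice(safe)                      # explore, but only safely
    else:
        a = max(safe, key=lambda a: q_values.get((pos, a), 0.0))
    return pos + a

pos = 2
for _ in range(100):
    pos = shielded_step(pos, {})   # empty Q-table: behavior is exploratory
assert 0 <= pos < GRID             # the constraint is never violated
```

Filtering before selection (rather than penalizing violations after the fact) is what distinguishes this style of safe exploration: the constraint holds during learning, not only at convergence.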