
    The 1990 progress report and future plans

    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research through engineering development to fielded NASA applications, particularly applications enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.

    Stochastic Explanations: Learning From Mistakes In Stochastic Domains

    Explaining anomalies in a stochastic environment is a complex task and a very active research field. In this thesis we contribute to this topic by introducing the concept of Stochastic Explanations, which associate with any unexpected event or anomaly a probability distribution over its possible causes. We present the EXP_GEN agent, which uses stochastic explanations to explain anomalies, combining Case-Based Reasoning and Reinforcement Learning mechanisms to, respectively, store and retrieve (event, stochastic explanation) pairs as cases, and to learn the probability distribution of the stochastic explanation for each anomaly. We claim that an agent using stochastic explanations reacts faster to unexpected events than an agent that uses a deterministic approach to explain anomalies, and we compare the performance of EXP_GEN against an agent that uses a greedy heuristic to explain anomalies. Unexpected events are a major opportunity for an intelligent agent to learn something new about the environment. Providing explanations for these events is only the first step: the agent must reuse previous experience and explanations to avoid making the same mistakes in the future. For this purpose, we present the GENERATE_GOAL+ algorithm, which replaces the basic goal generation mechanism of the EXP_GEN agent. This algorithm takes the agent's previous mistakes into account and uses them to generate better goals; in this way, the agent effectively learns from mistakes and improves its performance over time. We consider the ability to learn from one's own mistakes crucial to the implementation of more complex intelligent agents in the future. In our results, we show that the EXP_GEN agent using GENERATE_GOAL+ greatly outperforms the same agent using naïve goal generation. To further demonstrate the effectiveness of learning from mistakes, we also present an enhanced version of the GENERATE_GOAL+ algorithm, called CombatPlan, and show that the EXP_GEN agent can handle broader, more complex scenarios and learn effectively from the mistakes it makes. We train the agent for a certain number of iterations, so that it makes and explains mistakes, and then show that the agent's performance on the test scenario is directly related to the amount of training performed, that is, to the number of explanations generated during the training phase.
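
    The core data structure described above, a case base that maps each anomaly to a learned probability distribution over its possible causes, can be sketched in a few lines of Python. The sketch below is only an illustration of the idea under simple assumptions (count-style reinforcement of cause weights, exact-match retrieval); the class and method names, as well as the example anomaly and causes, are hypothetical and do not come from the thesis's EXP_GEN implementation.

from collections import defaultdict
import random


class StochasticExplanationStore:
    """Toy case base: anomaly -> probability distribution over candidate causes."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.cases = defaultdict(lambda: defaultdict(float))  # anomaly -> {cause: weight}

    def retrieve(self, anomaly):
        """CBR-style retrieval: return the normalized distribution stored for an anomaly."""
        weights = self.cases[anomaly]
        total = sum(weights.values())
        return {c: w / total for c, w in weights.items()} if total else {}

    def update(self, anomaly, confirmed_cause, reward=1.0):
        """RL-style update: reinforce the cause that turned out to explain the anomaly."""
        self.cases[anomaly][confirmed_cause] += self.learning_rate * reward

    def sample_cause(self, anomaly):
        """Sample a candidate cause according to the learned distribution."""
        dist = self.retrieve(anomaly)
        if not dist:
            return None
        causes, probs = zip(*dist.items())
        return random.choices(causes, weights=probs, k=1)[0]


if __name__ == "__main__":
    store = StochasticExplanationStore()
    for _ in range(9):                          # hypothetical anomaly and causes
        store.update("door_did_not_open", "door_locked")
    store.update("door_did_not_open", "sensor_noise")
    print(store.retrieve("door_did_not_open"))  # {'door_locked': 0.9, 'sensor_noise': 0.1}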

    Automated Service Composition Using AI Planning and Beyond

    Automated Service Composition is one of the "grand challenges" in the area of Service-Oriented Computing. Mike Papazoglou was not only one of the first researchers to identify the importance of the problem, but also one of the first to propose formulating it as an AI planning problem. Unfortunately, classical planning algorithms were not sufficient, and a number of extensions were needed, e.g., to support extended (rich) goal languages that capture user intentions, and to plan under the uncertainty caused by the non-deterministic nature of services; these issues were formulated (and partially addressed) by Mike, and constitute one of his key contributions to the service community.
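
    As a rough illustration of casting service composition as classical planning, each service can be modelled as an operator with preconditions and effects over a set of facts, and a composition is a sequence of services that transforms the initial facts into the goal facts. The example below is a minimal sketch under that assumption; the service names and the breadth-first planner are purely illustrative and do not reflect the richer goal languages or uncertainty handling discussed above.

from collections import deque

# Hypothetical services, each modelled as (preconditions, effects) over ground facts.
SERVICES = {
    "geocode":    ({"address"},              {"coordinates"}),
    "weather":    ({"coordinates"},          {"forecast"}),
    "book_hotel": ({"coordinates", "dates"}, {"reservation"}),
}


def compose(initial, goal, services=SERVICES):
    """Breadth-first forward search for a shortest sequence of services reaching the goal."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, (pre, eff) in services.items():
            if pre <= state:
                successor = frozenset(state | eff)
                if successor not in visited:
                    visited.add(successor)
                    frontier.append((successor, plan + [name]))
    return None  # no composition achieves the goal


if __name__ == "__main__":
    print(compose({"address", "dates"}, {"forecast", "reservation"}))
    # e.g. ['geocode', 'weather', 'book_hotel'] (order among equal-length plans may vary)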

    Fourth Conference on Artificial Intelligence for Space Applications

    Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications identify common goals and address issues of general interest in the AI community. Topics include: space applications of expert systems in fault diagnostics, telemetry monitoring and data collection, design and systems integration, and planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.

    Multi-robot systems in cognitive factories: representation, reasoning, execution and monitoring

    We propose the use of causality-based formal representation and automated reasoning methods from artificial intelligence to endow multiple teams of robots in a factory with high-level cognitive capabilities, such as optimal planning and diagnostic reasoning. We present a framework that features bilateral interaction between task and motion planning and embeds geometric reasoning in causal reasoning. We embed this planning framework inside an execution and monitoring framework and show its applicability to multi-robot systems. In particular, we focus on two domains that are relevant to cognitive factories: i) a manipulation domain with multiple robots working concurrently and cooperatively to achieve a common goal, and ii) a factory domain with multiple teams of robots utilizing shared resources. In the manipulation domain, two pantograph robots perform a complex task that requires true concurrency. The monitoring framework checks plan execution for two sorts of failures: collisions with unknown obstacles and changes to the world due to human interventions. Depending on the cause of the failure, recovery is done by calling the motion planner (to find a different trajectory) or the causal reasoner (to find a new task plan); recovery therefore relies not only on motion planning but also on causal reasoning. We extend our planning and monitoring framework to the factory domain with multiple teams of robots by introducing algorithms for finding optimal decoupled plans and for diagnosing the cause of a failure/discrepancy (e.g., robots may break or tasks may be reassigned to teams). We show the applicability of these algorithms on an intelligent factory scenario through dynamic simulations and physical experiments.
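
    The recovery logic described above, retrying at the motion level when the task plan is still valid and falling back to the causal reasoner when the world itself has changed, amounts to a simple execute-and-monitor loop. The following Python sketch is only a schematic rendering of that control flow under assumed interfaces; the stand-in functions at the bottom are hypothetical and are not part of the authors' framework.

import random


def execute_and_monitor(task_plan, replan_motion, replan_task, execute):
    """Schematic execute-and-monitor loop.

    Collisions with unknown obstacles keep the task plan and trigger motion replanning;
    world changes (e.g., human intervention) trigger the causal reasoner for a new task plan.
    """
    i = 0
    while i < len(task_plan):
        outcome = execute(task_plan[i])
        if outcome == "ok":
            i += 1
        elif outcome == "collision":
            replan_motion(task_plan[i])   # same action, different trajectory
        else:                             # "world_changed"
            task_plan = replan_task()     # fresh task plan from the current state
            i = 0
    return task_plan


if __name__ == "__main__":
    # Toy stand-ins for the execution layer, motion planner and causal reasoner.
    def execute(action):
        return random.choice(["ok", "ok", "ok", "collision", "world_changed"])

    def replan_motion(action):
        print(f"motion replanning for '{action}'")

    def replan_task():
        print("causal reasoner computing a new task plan")
        return ["pick", "move", "place"]

    execute_and_monitor(["pick", "move", "place"], replan_motion, replan_task, execute)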

    Cooperative Monitoring to Diagnose Multiagent Plans

    Diagnosing the execution of a Multiagent Plan (MAP) means identifying and explaining action failures (i.e., actions that did not reach their expected effects). Current approaches to MAP diagnosis are substantially centralized and assume that action failures are independent of each other. In this paper, the diagnosis of MAPs executed in a dynamic and partially observable environment is addressed in a fully distributed and asynchronous way; in addition, action failures are no longer assumed to be independent of each other. The paper presents a novel methodology, named Cooperative Weak-Committed Monitoring (CWCM), enabling agents to cooperate while monitoring their own actions. Cooperation helps the agents cope with very scarcely observable environments: what an agent cannot observe directly can be acquired from other agents. CWCM exploits nondeterministic action models to carry out two main tasks: detecting action failures and building trajectory-sets (i.e., structures representing the knowledge an agent has about the environment in the recent past). Relying on trajectory-sets, each agent is able to explain its own action failures in terms of exogenous events that occurred during the execution of the actions themselves. To cope with dependent failures, CWCM is coupled with a diagnostic engine that distinguishes between primary and secondary action failures. An experimental analysis demonstrates that the CWCM methodology, together with the proposed diagnostic inferences, is effective in identifying and explaining action failures even in scenarios where system observability is significantly reduced.
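
    A very small illustration of the ideas above, sharing observations between agents, checking an action's expected effects against the merged observations, and classifying a failure as primary or secondary depending on whether an earlier failed action it depends on can explain it, is sketched below in Python. This is not the CWCM algorithm itself; the data representation and the conflict-resolution choice are assumptions made only for the example.

def merge_observations(own_obs, shared_obs):
    """Combine an agent's own observations with those received from cooperating agents.

    Observations map state variables to values; conflicts are resolved in favour of the
    agent's own sensing (an illustrative choice, not prescribed by CWCM).
    """
    merged = dict(shared_obs)
    merged.update(own_obs)
    return merged


def action_failed(expected_effects, observations):
    """An action failure is detected when an observed variable contradicts an expected effect."""
    return any(var in observations and observations[var] != value
               for var, value in expected_effects.items())


def classify_failure(action, failed_actions, dependencies):
    """Secondary failures are explained by an earlier failed action the current one depends on."""
    if any(dep in failed_actions for dep in dependencies.get(action, [])):
        return "secondary"
    return "primary"


if __name__ == "__main__":
    own = {"block_on_table": True}
    shared = {"gripper_holding": False}          # received from a cooperating agent
    observations = merge_observations(own, shared)
    expected = {"gripper_holding": True, "block_on_table": False}
    if action_failed(expected, observations):
        print(classify_failure("lift_block",
                               failed_actions={"grasp_block"},
                               dependencies={"lift_block": ["grasp_block"]}))  # secondary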

    Optimal global planning for cognitive factories with multiple teams of heterogeneous robots

    We consider a cognitive factory domain with multiple teams of heterogeneous robots, where the goal is for all teams to complete their tasks as soon as possible so as to achieve the overall shortest delivery time for a given manufacturing order. Should the need arise, teams help each other by lending robots. This domain is challenging in the following ways: the different capabilities of heterogeneous robots need to be considered in the model; discrete symbolic representation and reasoning need to be integrated with continuous external computations to find feasible plans (e.g., to avoid collisions); a coordination of the teams should be found for an optimal feasible global plan (with minimum makespan); and, when a discrepancy/failure encountered during plan execution prevents the execution of the rest of the plan, finding a diagnosis for the discrepancy/failure and recovering from the plan failure is required to achieve the goals. We introduce a formal planning, execution, and monitoring framework to address these challenges, utilizing logic-based formalisms that allow us to embed external computations in continuous spaces, together with the relevant state-of-the-art automated reasoners. To find a global plan with minimum makespan, we propose a semi-distributed approach that utilizes a mediator, subject to the condition that the teams and the mediator do not know about each other's workspaces or tasks. According to this approach, 1) the mediator gathers sufficient information from the teams about when they can lend or need to borrow robots, and how many and of what kind; 2) based on this information, the mediator computes an optimal coordination of the teams and informs each team about this coordination; 3) each team computes its own optimal local plan to achieve its own tasks, taking into account the information conveyed by the mediator as well as external computations to avoid collisions; and 4) these optimal local plans are merged into an optimal global plan. For the first and third stages, we utilize methods and tools of hybrid reasoning. For the second stage, we formulate the problem of finding an optimal coordination of teams that can help each other, prove its intractability, and describe how to solve it using existing automated reasoners. For the last stage, we prove the optimality of the global plan. For execution and monitoring of an optimal global plan, we introduce a formal framework that provides methods to diagnose failures due to broken robots and to handle changes in manufacturing orders and in workspaces. We illustrate the applicability of our approaches on various scenarios of cognitive factories with dynamic simulations and a physical implementation.
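
    The mediator's second stage, computing a coordination of the teams from the lend/borrow information it has gathered, is solved in the work above with automated reasoners and is shown to be intractable in general. The sketch below is only a toy, greedy stand-in for that stage: it assigns a pool of spare robots to whichever team currently has the largest estimated makespan. The per-team makespan estimates and gains are hypothetical inputs, not quantities from the paper, and the greedy rule is not the paper's optimal coordination method.

def coordinate(makespans, spare_pool, gain_per_robot):
    """Greedy toy coordination: repeatedly lend one spare robot to the bottleneck team.

    makespans:       {team: estimated makespan with its current robots}
    spare_pool:      number of robots the other teams can lend in total
    gain_per_robot:  {team: estimated makespan reduction per borrowed robot}
    """
    makespans = dict(makespans)
    assignment = {team: 0 for team in makespans}
    for _ in range(spare_pool):
        bottleneck = max(makespans, key=makespans.get)
        assignment[bottleneck] += 1
        makespans[bottleneck] = max(0, makespans[bottleneck] - gain_per_robot[bottleneck])
    return assignment, max(makespans.values())


if __name__ == "__main__":
    # Hypothetical estimates: team B is the bottleneck, so it receives the spare robots.
    estimated = {"A": 10, "B": 25, "C": 12}
    gains = {"A": 2, "B": 4, "C": 3}
    print(coordinate(estimated, spare_pool=3, gain_per_robot=gains))
    # ({'A': 0, 'B': 3, 'C': 0}, 13)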

    Evaluation of the HARDMAN comparability methodology for manpower, personnel and training

    The methodology evaluation and recommendations are part of an effort to improve the Hardware versus Manpower (HARDMAN) methodology for projecting manpower, personnel, and training (MPT) requirements to support new acquisitions. Several different validity tests are employed to evaluate the methodology. The methodology conforms fairly well with both MPT user needs and other accepted manpower modeling techniques. Audits of three completed HARDMAN applications reveal only a small number of potential problem areas compared to the total number of issues investigated. The reliability study results conform well with the problem areas uncovered through the audits. The results of the accuracy studies suggest that the manpower life-cycle cost component is only marginally sensitive to changes in other related cost variables. Even with some minor problems, the methodology seems sound and has good near-term utility for the Army. Recommendations are provided to address the problem areas revealed through the evaluation.