8 research outputs found

    Causal simulation and sensor planning in predictive monitoring

    Two issues are addressed that arise in the task of detecting anomalous behavior in complex systems with numerous sensor channels: how to adjust alarm thresholds dynamically within the changing operating context of the system, and how to use sensors selectively, so that nominal operation can be verified reliably without processing a prohibitive amount of sensor data. The approach involves simulating a causal model of the system, which provides information on expected sensor values and on dependencies between predicted events; this information is useful in assessing the relative importance of events so that sensor resources can be allocated effectively. The potential applicability of this work to the execution monitoring of robot task plans is briefly discussed.
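The abstract's two ideas, context-dependent thresholds from simulated expectations and importance-based sensor selection, can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the model representation, the relative-tolerance bands, and the dependency-count ranking are all assumptions made here for concreteness.

```python
def simulate(model, inputs):
    """Propagate inputs through a causal model given as
    {node: (parents, fn)} listed in topological order;
    returns the expected value at every node."""
    values = dict(inputs)
    for node, (parents, fn) in model.items():
        values[node] = fn(*(values[p] for p in parents))
    return values

def thresholds(expected, tolerance=0.1):
    """Context-dependent alarm bands: expected value +/- a relative
    tolerance, so thresholds track the current operating point."""
    return {s: (v * (1 - tolerance), v * (1 + tolerance))
            for s, v in expected.items()}

def rank_sensors(model):
    """Rank modeled channels by how many downstream nodes depend on
    them, so sensing effort goes to the most consequential ones."""
    counts = {node: 0 for node in model}
    for _, (parents, _) in model.items():
        for p in parents:
            if p in counts:
                counts[p] += 1
    return sorted(counts, key=counts.get, reverse=True)

# Toy pump model: flow depends on commanded speed; pressure on flow.
model = {
    "flow": (("speed",), lambda s: 0.5 * s),
    "pressure": (("flow",), lambda f: 2.0 * f),
}
expected = simulate(model, {"speed": 100.0})
bands = thresholds(expected)
```

A reading outside its band would then be flagged as anomalous only relative to the current context, rather than against a fixed global threshold.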

    Adaptive planning for distributed systems using goal accomplishment tracking

    Goal accomplishment tracking is the process of monitoring the progress of a task, or series of tasks, towards completing a goal. It is used to monitor goal progress in a variety of domains, including workflow processing, teleoperation, and industrial manufacturing. In practice, it involves constant monitoring of task execution, analysis of the resulting data to determine task progress, and notification of interested parties. This information is usually used passively, to observe goal progress; however, responding to it can prevent goal failures, and responding proactively, in an opportunistic way, can allow goals to be completed faster. This paper proposes an architecture that supports adaptive planning of tasks, for fault tolerance or opportunistic task execution, based on goal accomplishment tracking. It argues that dramatically increased performance can be gained by monitoring task execution and altering plans dynamically.
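The two adaptive behaviors the abstract names, fault-tolerant replanning and opportunistic early completion, can be sketched as a simple execution loop. This is an illustrative toy, not the paper's architecture: the plan representation, the alternatives table, and the task names are all invented here.

```python
def execute_plan(plan, alternatives, goal_met):
    """Run tasks in order while tracking goal progress.
    On task failure, substitute a registered alternative (fault
    tolerance); if the goal is already met, stop early (opportunism)."""
    log = []
    for task in plan:
        if goal_met(log):
            break  # opportunistic early completion
        try:
            task()
            log.append(("ok", task.__name__))
        except RuntimeError:
            alt = alternatives.get(task.__name__)
            if alt is None:
                raise  # no recovery plan: propagate the failure
            alt()
            log.append(("recovered", alt.__name__))
    return log

# Hypothetical manufacturing tasks for illustration.
def fetch(): pass
def assemble_fail(): raise RuntimeError("jam")
def assemble_backup(): pass
def inspect_part(): pass

log = execute_plan(
    [fetch, assemble_fail, inspect_part],
    {"assemble_fail": assemble_backup},
    goal_met=lambda log: False,
)
```

The log produced by the loop is exactly the kind of progress record that goal accomplishment tracking would report to interested parties.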

    Generating Human Motion by Symbolic Reasoning

    This paper describes work on applying AI planning methods to generate human body motion for the purpose of animation. It is based on the observation that, although we do not know how the body actually controls the massively redundant degrees of freedom of its joints as it moves in a given situation, the appropriateness of specific behavior for particular conditions can be axiomatized at a gross level from commonsense observations. Given these motion axioms (rules), the task of the planner is to find a discrete sequence of intermediate postures of the body via goal-reduction reasoning over the rules, together with a procedure for discovering specific collision-avoidance constraints, such that any two consecutive postures are related via primitive motions of the feet, the pelvis, the torso, the head, the hands, or other body parts. The planner also accounts for the fact that body motions are continuous by taking advantage of execution-time feedback. Planning decisions are made in the task space, where our elementary spatial intuition is preserved as far as possible, dropping down to the joint-space formulation typical of robot motion planning only when absolutely necessary. We claim that this work is the first serious attempt to use an AI planning paradigm for the animation of human body motion.
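Goal-reduction reasoning of the kind described above can be sketched as recursive expansion of a motion goal into primitive motions. The rule set and posture names below are invented for illustration; the paper's actual axioms and collision-avoidance procedure are far richer.

```python
def reduce_goal(goal, rules):
    """Recursively expand a goal into a flat sequence of primitive
    motions using decomposition rules {goal: [subgoals, ...]}.
    A goal with no rule is treated as a primitive motion."""
    if goal not in rules:
        return [goal]
    plan = []
    for sub in rules[goal]:
        plan.extend(reduce_goal(sub, rules))
    return plan

# Hypothetical motion axioms for a "sit down" behavior.
rules = {
    "sit_down": ["approach_chair", "lower_body"],
    "approach_chair": ["step_feet", "turn_torso"],
    "lower_body": ["bend_knees", "shift_pelvis"],
}
plan = reduce_goal("sit_down", rules)
# plan is a discrete sequence of primitive body motions, i.e. the
# intermediate postures are related pairwise by single primitives
```

Each adjacent pair in the resulting sequence corresponds to one primitive motion of a body part, which is the structure the abstract requires of consecutive postures.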

    A stochastic approach to robot plan formation


    Intelligent Agent Architectures: Reactive Planning Testbed

    An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has been found useful to approach their construction with some organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents, including complex robotic agents, built using various architectural approaches can be subjected.

    Grounding semantic cognition using computational modelling and network analysis

    The overarching objective of this thesis is to further the field of grounded semantics through a range of computational and empirical studies. Over the past thirty years, there have been many algorithmic advances in the modelling of semantic cognition. A commonality across these cognitive models is a reliance on hand-engineered "toy models". Despite incorporating newer techniques (e.g. long short-term memory), the model inputs remain unchanged. We argue that the inputs to these traditional semantic models bear little resemblance to real human experience. In this dissertation, we ground our neural network models by training them on real-world visual scenes from naturalistic photographs. Our approach is an alternative to both hand-coded features and embodied raw sensorimotor signals. We conceptually replicate the mutually reinforcing nature of hybrid (feature-based and grounded) representations using silhouettes of concrete concepts as model inputs. We then gradually develop a novel grounded cognitive semantic representation, which we call scene2vec, starting with object co-occurrences and then adding emotions and language-based tags. Limitations of our scene-based representation are identified for more abstract concepts (e.g. freedom). We further present a large-scale human semantics study, which reveals that small-world semantic network topologies are context-dependent and that scenes are the most dominant cognitive dimension. This finding leads us to conclude that there is no meaning without context. Lastly, scene2vec shows promising human-like context-sensitive stereotypes (e.g. gender-role bias), and we explore how such stereotypes are reduced by targeted debiasing. In conclusion, this thesis provides support for a novel computational viewpoint on investigating meaning: scene-based grounded semantics. Future research scaling scene-based semantic models to human level through virtual grounding has the potential to unearth new insights into the human mind and, concurrently, to advance artificial general intelligence by enabling robots, embodied or otherwise, to acquire and represent meaning directly from the environment.
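The starting point of the scene2vec construction, representing a concept by its object co-occurrences across scenes, can be sketched in miniature. This is a simplification made here for illustration; the thesis's actual representation also incorporates emotions and language-based tags, and the scenes and vocabulary below are invented.

```python
from collections import Counter

def cooccurrence_vectors(scenes, vocab):
    """Represent each vocabulary concept by how often it co-occurs
    with every other vocabulary item across a set of scenes, where
    each scene is the set of objects present in one photograph."""
    counts = {w: Counter() for w in vocab}
    for scene in scenes:
        for obj in scene:
            for other in scene:
                if other != obj and obj in counts:
                    counts[obj][other] += 1
    # Project the counts onto a fixed vocabulary ordering.
    return {w: [c[v] for v in vocab] for w, c in counts.items()}

# Toy "naturalistic scenes": sets of objects seen together.
scenes = [
    {"dog", "ball", "grass"},
    {"dog", "leash", "grass"},
    {"cat", "sofa"},
]
vocab = ["dog", "ball", "grass", "leash", "cat", "sofa"]
vecs = cooccurrence_vectors(scenes, vocab)
```

Concepts that occur in similar scenes end up with similar vectors, which is the sense in which meaning here is grounded in scene context rather than in hand-coded features.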

    Proceedings of the NASA Conference on Space Telerobotics, volume 1

    The theme of the Conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty.