
    REBA: A Refinement-Based Architecture for Knowledge Representation and Reasoning in Robotics

    This paper describes an architecture for robots that combines the complementary strengths of probabilistic graphical models and declarative programming to represent and reason with logic-based and probabilistic descriptions of uncertainty and domain knowledge. An action language is extended to support non-boolean fluents and non-deterministic causal laws. This action language is used to describe tightly-coupled transition diagrams at two levels of granularity, with a fine-resolution transition diagram defined as a refinement of a coarse-resolution transition diagram of the domain. The coarse-resolution system description, and a history that includes (prioritized) defaults, are translated into an Answer Set Prolog (ASP) program. For any given goal, inference in the ASP program provides a plan of abstract actions. To implement each such abstract action, the robot automatically zooms to the part of the fine-resolution transition diagram relevant to this action. A probabilistic representation of the uncertainty in sensing and actuation is then included in this zoomed fine-resolution system description, and used to construct a partially observable Markov decision process (POMDP). The policy obtained by solving the POMDP is invoked repeatedly to implement the abstract action as a sequence of concrete actions, with the corresponding observations being recorded in the coarse-resolution history and used for subsequent reasoning. The architecture is evaluated in simulation and on a mobile robot moving objects in an indoor domain, to show that it supports reasoning with violation of defaults, noisy observations and unreliable actions, in complex domains. Comment: 72 pages, 14 figures.
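    The coarse-to-fine control loop described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `coarse_plan` stands in for ASP inference, `zoomed_policy` stands in for solving the POMDP over the zoomed fine-resolution description, and all action names and the toy plan are invented.

```python
def coarse_plan(goal):
    """Stand-in for ASP inference: return a plan of abstract actions
    for the given goal (hard-coded here for illustration)."""
    return ["move(kitchen)", "pickup(cup)", "move(office)", "putdown(cup)"]

def zoomed_policy(abstract_action):
    """Stand-in for the POMDP policy over the zoomed fine-resolution
    description: yield the concrete actions implementing one abstract
    action. Here each abstract action decomposes into two steps."""
    return [f"{abstract_action}:step{i}" for i in (1, 2)]

def execute(goal):
    """Run the two-level loop: plan at coarse resolution, execute each
    abstract action as concrete steps, and record observations in the
    coarse-resolution history for subsequent reasoning."""
    history = []
    for action in coarse_plan(goal):
        for concrete in zoomed_policy(action):
            history.append(f"observed({concrete})")
    return history

trace = execute("delivered(cup, office)")
```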

    Creating, testing and implementing a method for retrieving conversational inference with ontological semantics and defaults

    Conversational inference refers to that information which is assumed to be understood by both speaker and listener in conversation. With conversational inference, a speaker makes the assumption that what is being omitted from the conversation is already known by the listener. In return, a listener assumes that the information that the listener perceives to be omitted is the same as what the speaker believes to be omitted. Ontological Semantic defaults represent the information which is implied in a single event. Defaults are typically excluded from conversation unless new information is being presented or the speaker is purposefully emphasizing the default for some reason. Little research has been done in the area of defaults. This thesis expands the research on defaults through the implementation and adjustment of an algorithm for default detection. The investigation into default detection is broken into two phases. In the first phase, the original algorithm for default detection is implemented. This algorithm involves pulling defaults based on adjectival modifiers to an object associated with an event. Phase 2 expands the algorithm from Phase 1 to include several additional modifiers. The algorithm from Phase 2 is found to be more effective than that in Phase 1.
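    The Phase 1 idea, as described, pulls defaults from adjectival modifiers of an object associated with an event. A very loose sketch of that idea follows; the toy ontology and all names are invented for illustration, whereas the thesis itself works with Ontological Semantics resources.

```python
# Invented toy ontology: object -> {property: default value}.
ONTOLOGY_DEFAULTS = {
    "knife": {"material": "metal", "sharpness": "sharp"},
    "snow": {"color": "white"},
}

def restated_defaults(obj, modifiers):
    """Return those adjectival modifiers of `obj` that merely restate a
    default recorded in the ontology, i.e. carry no new information."""
    defaults = set(ONTOLOGY_DEFAULTS.get(obj, {}).values())
    return [m for m in modifiers if m in defaults]

# "white snow": "white" restates a default; "deep" is new information.
restated_defaults("snow", ["white", "deep"])
```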

    A RULE-BASED APPROACH TO ANIMATING MULTI-AGENT ENVIRONMENTS

    This dissertation describes ESCAPE (Expert Systems in Computer Animation Production Environments), a multi-agent animation system for building domain-oriented, rule-based visual programming environments. Much recent work in computer graphics has been concerned with producing behavioural animations of artificial life-forms mainly based on algorithmic approaches. This research indicates how, by adding an inference engine and rules that describe such behaviour, traditional computer animation environments can be enhanced. The comparison between using algorithmic approaches and using a rule-based approach for representing multi-agent worlds is not based upon their respective claims to completeness, but rather on the ease with which end users may express their knowledge and control their animations with a minimum of technical knowledge. An environment for the design of computer animations incorporating an expert system approach is described. In addition to direct manipulation of objects on the screen, the environment allows users to describe behavioural rules based upon both the physical and non-physical attributes of objects. These rules can be interpreted to suggest the transition from stage to stage or to automatically produce a longer animation. The output from the system can be integrated into a commercially available 3D modelling and rendering package. Experience indicates that a hybrid environment, mixing algorithmic and rule-based approaches, would be very promising and offer benefits in application areas such as creating realistic background scenes and modelling human beings or animals either singly or in groups. A prototype evaluation system and three different domains are described and illustrated with preliminary animated images.
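    The kind of rule-based behavioural control this abstract describes can be illustrated with a toy engine. All rules, attribute names, and thresholds below are invented; ESCAPE itself couples an inference engine to a 3D animation environment rather than operating on dictionaries.

```python
# Each rule pairs a condition on an agent's attributes with an update,
# and the engine fires every matching rule once per animation frame.
RULES = [
    # If an agent is close to an obstacle, it turns away.
    (lambda a: a["distance_to_obstacle"] < 2.0,
     lambda a: a.update(heading=a["heading"] + 90)),
    # Otherwise it keeps moving forward.
    (lambda a: a["distance_to_obstacle"] >= 2.0,
     lambda a: a.update(x=a["x"] + 1)),
]

def step(agent, rules=RULES):
    """Advance one frame: fire every rule whose condition matches."""
    for condition, action in rules:
        if condition(agent):
            action(agent)
    return agent

agent = {"x": 0, "heading": 0, "distance_to_obstacle": 5.0}
step(agent)  # far from the obstacle, so the agent moves forward
```

    Because behaviour lives in the rule list rather than in animation code, an end user can change what agents do by editing rules alone, which is the ease-of-expression argument the abstract makes.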

    Types of verbal interaction with instructable robots

    An instructable robot is one that accepts instruction in some natural language such as English and uses that instruction to extend its basic repertoire of actions. Such robots are quite different in conception from autonomously intelligent robots, which provide the impetus for much of the research on inference and planning in artificial intelligence. Examined here are the significant problem areas in the design of robots that learn from verbal instruction. Examples are drawn primarily from our earlier work on instructable robots and recent work on the Robotic Aid for the physically disabled. Natural-language understanding by machines is discussed, as well as the possibilities and limits of verbal instruction. The core problem of verbal instruction, namely, how to achieve specific concrete action in the robot in response to commands that express general intentions, is considered, as are two major challenges to instructability: achieving appropriate real-time behavior in the robot, and extending the robot's language capabilities.

    Why Do Developers Get Password Storage Wrong? A Qualitative Usability Study

    Passwords are still a mainstay of various security systems, as well as the cause of many usability issues. For end-users, many of these issues have been studied extensively, highlighting problems and informing design decisions for better policies and motivating research into alternatives. However, end-users are not the only ones who have usability problems with passwords! Developers who are tasked with writing the code by which passwords are stored must do so securely. Yet history has shown that this complex task often fails due to human error with catastrophic results. While an end-user's choice of a bad password can have dire consequences, a developer who forgets to hash and salt a password database can cause far larger problems. In this paper we present a first qualitative usability study with 20 computer science students to discover how developers deal with password storage and to inform research into aiding developers in the creation of secure password systems.