    Coherent Integration of Databases by Abductive Logic Programming

    We introduce an abductive method for the coherent integration of independent data sources. The idea is to compute a list of data facts that should be inserted into the amalgamated database or retracted from it in order to restore its consistency. This method is implemented by an abductive solver, called Asystem, that applies SLDNFA-resolution on a meta-theory that relates different, possibly contradicting, input databases. We also give a purely model-theoretic analysis of the possible ways to 'recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as little inconsistent information as possible. This allows us to characterize the 'recovered databases' in terms of the 'preferred' (i.e., most consistent) models of the theory. The outcome is an abduction-based application that is sound and complete with respect to a corresponding model-based, preferential semantics, and -- to the best of our knowledge -- is more expressive (and thus more general) than any other implementation of coherent integration of databases.
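
    The repair step described above (find sets of facts whose insertion or retraction restores consistency, preferring the smallest changes) can be illustrated with a brute-force sketch. This is only a toy rendering under assumed data structures; the actual Asystem computes repairs abductively via SLDNFA-resolution on a meta-theory, which is not reproduced here.

        from itertools import combinations

        def violations(db, constraints):
            """Return the integrity constraints violated by `db` (a set of facts)."""
            return [c for c in constraints if not c(db)]

        def minimal_repairs(db, constraints, max_changes=3):
            """Enumerate the smallest sets of facts whose retraction restores
            consistency (brute force; smaller repairs are preferred)."""
            for k in range(max_changes + 1):
                repairs = [set(r) for r in combinations(db, k)
                           if not violations(db - set(r), constraints)]
                if repairs:
                    return repairs   # all minimal repairs have size k
            return []

        # Two amalgamated sources disagree on Alice's salary; the constraint
        # says each employee has exactly one salary (a functional dependency).
        db = {("salary", "alice", 50), ("salary", "alice", 60), ("salary", "bob", 40)}
        one_salary = lambda facts: all(
            len({s for (p, e2, s) in facts if p == "salary" and e2 == e}) == 1
            for (p, e, _) in facts if p == "salary")
        print(minimal_repairs(db, [one_salary]))
        # -> retracting either of Alice's conflicting salary facts is a repair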

    Propagation techniques in WAM-based architectures : the FIDO-III approach

    In this paper we develop techniques for implementing finite domain constraints in the Warren Abstract Machine (WAM) in order to solve large combinatorial problems efficiently. The WAM is the de facto standard model for compiling Prolog. The FIDO system ("FInite Domain") provides the same functionality as the finite domain part of CHIP. The extension includes the integration of several new variable types (suspended variables, domain variables and suspended domain variables) into the WAM. The "firing conditions" are the lookahead and forward control schemes known from CHIP. We have developed a constraint model in which each constraint is divided into constraint initialization code, constraint testing code and a constraint body. Furthermore, we supply a deeply integrated WAM built-in that realizes the first-fail principle. Besides a summary of the important theoretical results, we give a specification of the compilation process in the WAM Compilation Scheme. We also present a simple graphical analysis method for estimating the computational burden of lookahead and forward constraints. This work, part of the FIDO lab within the ARC-TEC project, explores finite domain consistency techniques in logic programming.
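
    The first-fail principle and forward checking mentioned above can be sketched at a high level in Python. This is purely illustrative: FIDO realizes these as WAM-level instructions and built-ins, and the constraint representation below (binary predicates over variable pairs) is an assumption of the sketch.

        def first_fail_search(domains, constraints):
            """Finite-domain backtracking search that always branches on the
            variable with the smallest remaining domain (first-fail) and
            prunes neighbouring domains after each assignment (forward
            checking). `domains` maps variable -> set of values;
            `constraints` is a list of (x, y, pred) with pred(vx, vy) -> bool."""
            if all(len(d) == 1 for d in domains.values()):
                solution = {v: next(iter(d)) for v, d in domains.items()}
                ok = all(pred(solution[x], solution[y]) for x, y, pred in constraints)
                return solution if ok else None
            # First fail: pick the unassigned variable with the fewest values left.
            var = min((v for v in domains if len(domains[v]) > 1),
                      key=lambda v: len(domains[v]))
            for value in sorted(domains[var]):
                pruned = {v: set(d) for v, d in domains.items()}
                pruned[var] = {value}
                ok = True
                for x, y, pred in constraints:   # forward checking
                    if x == var:
                        pruned[y] = {w for w in pruned[y] if pred(value, w)}
                        ok = ok and bool(pruned[y])
                    elif y == var:
                        pruned[x] = {w for w in pruned[x] if pred(w, value)}
                        ok = ok and bool(pruned[x])
                if ok:
                    solution = first_fail_search(pruned, constraints)
                    if solution is not None:
                        return solution
            return None

        # X < Y < Z over small domains; first-fail branches on Z (2 values) first.
        lt = lambda a, b: a < b
        domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {2, 3}}
        print(first_fail_search(domains, [("X", "Y", lt), ("Y", "Z", lt)]))
        # -> {'X': 1, 'Y': 2, 'Z': 3}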

    Naval Integration into Joint Data Strategies and Architectures in JADC2

    NPS NRP Technical Report. As Joint capabilities mature and take shape in the Joint All Domain C2 (JADC2) concept, Services, COCOMs and Coalition Partners will need to invest in efforts that integrate seamlessly into Joint capabilities. The objective is to study options for the Navy, including Naval Special Warfare Command under SOCOM, for integrating the Navy's data strategy and architecture under the unifying JADC2 umbrella. Further objectives are to explore the alternatives considered by SOCOM and the Air Force, which are responsible for the JADC2 Information Advantage and Digital Mission Command & Control capabilities. A major purpose of Joint, Service/COCOM, agency and Coalition Partner capabilities is to provide a shared core of integrated canonical services for data, information, and knowledge, with representations supporting vertical interoperability across all command levels and JADC2, lateral interoperability between Naval Services/COCOMs, and any combination of JADC2 constituents, agencies, and coalition partners. Our research plan is to explore the available data strategy options by leveraging previous NRP work (NPS-20-N313-A). We will participate in the emerging data strategy of the Navy's JADC2 project, Overmatch. Working with MITRE, our team will explore the Air Force JADC2 data strategy implemented in the ABMS DataOne component. Our goal is a seamless integration between the Naval Data Strategy and the data strategies behind the JADC2 Information Advantage and Digital Mission Command & Control capabilities. Our plan includes studying the Service-to-Service and Service-to-COCOM interoperability options required for Joint operations, with the goal of minimizing OODA-loop latency across sensing, situation discovery and monitoring, and knowledge understanding for planning, deciding, and acting. Our team recognizes that JADC2 requires a virtual model allowing interoperability between subordinate C2 systems for Services, agencies, and partners. Without such a flexible 'joint' intersection as an organizing principle for the hierarchical structure, it would be impossible to define the necessary temporal and spatial fidelities for each level of organizational command required for implementing JADC2. Research deliverables will document the results of exploring Joint, COCOM, Agency and Partner data strategy approaches as interoperability options for the emerging JADC2. We strive for a standard JADC2 interface.
    Keywords: JADC2, ABMS, DataOne, Information Advantage, Digital Mission Command, Integration
    N2/N6 - Information Warfare
    This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098). https://nps.edu/nrp
    Chief of Naval Operations (CNO)
    Approved for public release. Distribution is unlimited.

    Using rules of thumb to repair inconsistent knowledge


    Collaborative Diagnosis of Over-Subscribed Temporal Plans

    PhD thesis. Over-subscription, that is, being assigned too many tasks or requirements that are too demanding, is commonly encountered in temporal planning problems. As human beings, we often want to do more than we can and ask for things that may not be available, while underestimating how long each task takes. It is often difficult for us to detect the causes of failure in such situations and then find resolutions that are effective. We can greatly benefit from tools that assist us by looking out for these plan failures, identifying their root causes, and proposing preferred resolutions that lead to feasible plans. In the recent literature, several approaches have been developed to resolve such over-subscribed problems, which are often framed as over-constrained scheduling, configuration design or optimal planning problems. Most of them take an all-or-nothing approach, in which over-subscription is resolved by suspending constraints or dropping goals. While helpful, in real-world scenarios we often want to preserve our plan goals as much as possible. As human beings, we know that slightly weakening the requirements of a travel plan, or replacing one of its destinations with an alternative one, is often sufficient to resolve an over-subscription problem, whether the requirement being weakened is the duration of a deep-sea survey or the restaurant cuisine for a dinner date. The goal of this thesis is to develop domain-independent relaxation algorithms that perform this type of slight weakening of constraints, which we formalize as continuous relaxation, and to embody them in a computational aid, Uhura, that performs tasks akin to those of an experienced travel agent or ocean scientist. In over-subscribed situations, Uhura helps us diagnose the causes of failure, suggests alternative plans, and collaborates with us to resolve conflicting requirements in the most preferred way. Most importantly, the algorithms underlying Uhura support the weakening, instead of the suspension, of constraints and variable domains in a temporally flexible plan. The contribution of this thesis is two-fold. First, we develop an algorithmic framework, called Best-first Conflict-Directed Relaxation (BCDR), for performing plan relaxation. Second, we use the BCDR framework to perform relaxation for several different families of plan representations involving different types of constraints, including temporal constraints, chance constraints and variable domain constraints, and we incorporate several specialized conflict detection and resolution algorithms in support of their continuous weakening. The key idea behind BCDR's approach to continuous relaxation is to generalize the concepts of discrete conflicts and relaxations, first introduced by the model-based diagnosis community, to hybrid conflicts and relaxations, which denote minimal inconsistencies and minimal relaxations over both discrete and continuous relaxable constraints.
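
    The shape of such a best-first, conflict-directed loop can be sketched as follows. All helper functions here are assumptions of the sketch, not the thesis implementation, and the toy relaxations are discrete, whereas BCDR also generates hybrid relaxations over continuous constraints (e.g. extending a temporal bound by a real-valued amount rather than dropping it).

        import heapq

        def bcdr(constraints, consistent, conflict, relax_options):
            """Best-first conflict-directed relaxation (illustrative sketch).
            consistent(cs)   -> True if the constraint set cs is feasible
            conflict(cs)     -> a minimal infeasible subset of cs
            relax_options(c) -> [(cost, weaker_c_or_None), ...] ways to weaken
                                constraint c (None drops it entirely).
            Candidates are expanded in order of total relaxation cost, so the
            first feasible candidate found is a most-preferred relaxation."""
            queue = [(0.0, 0, frozenset(constraints))]
            seen, tiebreak = set(), 1
            while queue:
                cost, _, cs = heapq.heappop(queue)
                if cs in seen:
                    continue
                seen.add(cs)
                if consistent(cs):
                    return cost, cs              # cheapest relaxation found
                for c in conflict(cs):           # resolve one conflict
                    for step_cost, weaker in relax_options(c):
                        relaxed = set(cs) - {c}
                        if weaker is not None:
                            relaxed.add(weaker)
                        heapq.heappush(queue, (cost + step_cost, tiebreak,
                                               frozenset(relaxed)))
                        tiebreak += 1
            return None

        # Toy domain: each constraint ("task", hours) demands hours of an
        # 8-hour day; a set of demands is feasible if it fits the budget.
        BUDGET = 8

        def consistent(cs):
            return sum(h for _, h in cs) <= BUDGET

        def conflict(cs):
            subset, total = [], 0            # greedily collect a minimal
            for c in sorted(cs, key=lambda c: -c[1]):   # infeasible subset
                subset.append(c); total += c[1]
                if total > BUDGET:
                    return subset
            return []

        def relax_options(c):
            name, hours = c
            return [(hours / 2, (name, hours / 2)),  # weaken: halve the demand
                    (hours, None)]                   # or drop the task entirely

        print(bcdr([("survey", 6), ("transit", 4), ("maintenance", 2)],
                   consistent, conflict, relax_options))
        # -> a minimum-cost relaxation (total cost 4.0)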

    Construction of road restoration plans aimed at facilitating humanitarian logistics operations

    Disruptions to the transportation network are among the hardest consequences of a disaster. They can hamper the performance of emergency aid organizations, reducing the chances of saving critical victims during the response and recovery phases. The strategic restoration of a road network implies prioritizing those affected roads whose rehabilitation would most reduce travel times, allowing emergency relief vehicles, civilians and restoration machines to move faster through the network. The Humanitarian Road Restoration Problem (HURREP) is a relatively new topic in comparison with other research topics in disaster management. In this study, we present a mathematical model that schedules and routes restoration machines and relief vehicles working in parallel on the same network. We adopt the minimization of the weighted sum of attention times to communities as the objective function, seeking a restoration plan fully dedicated to supporting the relief plan. Among other features, our methods are able to deal with different relief modes working in parallel, road disruptions that are naturally removed over time (e.g. by evaporation) and vehicle-dependent starting times. We also provide a heuristic algorithm able to solve large instances of our problem in less than 2.7% of the runtime limit suggested by the Administrative Department for Prevention, Attention, and Recovery from Disasters in Antioquia, Colombia (DAPARD). We validated the applicability of our methods on real-world disaster scenarios through a case study based on the Mojana floods that occurred in northern Colombia in 2010-2011.
    Master's thesis (Magíster en Ingeniería Industrial).
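
    To make the objective concrete: in the special case of a single restoration machine where each repair immediately reaches one community, minimizing the weighted sum of attention times reduces to classic weighted completion-time scheduling, which Smith's ratio rule solves optimally. The sketch below illustrates only this reduced objective, not the paper's model, which handles parallel machines, routing and time-dependent disruptions.

        def schedule_repairs(jobs):
            """Order road-repair jobs for one restoration machine so that the
            weighted sum of community attention times is minimized. `jobs` is
            a list of (repair_time, community_weight) pairs; a community is
            assumed to be reached as soon as its road is repaired."""
            order = sorted(jobs, key=lambda j: j[0] / j[1])  # Smith's rule
            t, objective = 0.0, 0.0
            for duration, weight in order:
                t += duration            # repair finishes; community reached
                objective += weight * t  # weighted attention time
            return order, objective

        # Three blocked roads: (repair hours, priority weight of the community).
        print(schedule_repairs([(4, 1), (2, 3), (5, 2)]))
        # -> ([(2, 3), (5, 2), (4, 1)], 31.0)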

    A blackboard-based system for learning to identify images from feature data

    A blackboard-based system which learns recognition rules for objects from a set of training examples, and then identifies and locates these objects in test images, is presented. The system is designed to use data from a feature matcher developed at R.S.R.E. Malvern, which finds the best matches for a set of feature patterns in an image. The feature patterns are selected to correspond to typical object parts which occur with relatively consistent spatial relationships and are sufficient to distinguish the objects to be identified from one another. The learning element of the system develops two separate sets of rules, one to identify possible object instances and the other to attach probabilities to them. The search for possible object instances is exhaustive; its scale is not great enough for pruning to be necessary. Separate probabilities are established empirically for all combinations of features which could represent object instances. As accurate probabilities cannot be obtained from a set of preselected training examples, they are updated by feedback from the recognition process. The incorporation of rule induction and feedback into the blackboard system is achieved by treating the induced rules as data to be held on a secondary blackboard. The single recognition knowledge source effectively contains empty rules into which this data can be slotted, allowing it to be used to recognise any number of objects - there is no need to develop a separate knowledge source for each object. Additional object-specific background information to aid identification can be added by the user in the form of background checks to be carried out on candidate objects. The system has been tested using synthetic data, and successfully identified combinations of geometric shapes (squares, triangles etc.). Limited tests on photographs of vehicles travelling along a main road were also performed successfully.
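
    The empirical probabilities updated by recognition feedback, as described above, might be tracked along these lines. This is an illustrative sketch with hypothetical feature names, not the original system's code.

        from collections import defaultdict

        class FeatureComboStats:
            """Track, per combination of matched feature patterns, how often
            it turned out to be a true object instance, updating the estimate
            from recognition feedback."""
            def __init__(self):
                self.hits = defaultdict(int)    # combo -> confirmed instances
                self.trials = defaultdict(int)  # combo -> candidate instances

            def probability(self, combo):
                """Laplace-smoothed empirical probability that this feature
                combination represents a real object instance."""
                return (self.hits[combo] + 1) / (self.trials[combo] + 2)

            def feedback(self, combo, confirmed):
                """Update counts after the recognition process accepts or
                rejects a candidate built from this combination."""
                self.trials[combo] += 1
                if confirmed:
                    self.hits[combo] += 1

        stats = FeatureComboStats()
        combo = frozenset({"wheel_arch", "windscreen"})  # hypothetical features
        stats.feedback(combo, True)
        stats.feedback(combo, False)
        print(stats.probability(combo))  # -> 0.5 after one hit in two trials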