
    Distributed coordination in unstructured intelligent agent societies

    Current research on multi-agent coordination and distributed problem solving is still not robust or scalable enough to build large real-world collaborative agent societies, because it relies either on centralised components with full knowledge of the domain or on pre-defined social structures. Our approach overcomes these limitations with a generic coordination framework for distributed problem solving in totally unstructured environments, which enables each agent to decompose problems into sub-problems, identify those it can solve itself, and search for other agents to which it can delegate the sub-problems for which it lacks the necessary knowledge or resources. For the problem decomposition process, we have developed two distributed versions of the Graphplan planning algorithm. To allow an agent to discover other agents with the skills needed for unsolved sub-problems, we have created two peer-to-peer search algorithms that build and maintain a semantic overlay network connecting agents through dependency relationships, which speeds up future searches. Our approach was evaluated in two different scenarios, allowing us to conclude that it is efficient, scalable and robust, enabling the coordinated distributed solving of complex problems in unstructured environments without the unacceptable assumptions of the alternative approaches developed thus far.
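    As a rough, hypothetical sketch of the coordination loop summarized above (not the thesis's actual algorithms), the Python fragment below shows an agent decomposing a goal into sub-goals, solving those covered by its own skills, and delegating the rest to a peer found through an overlay lookup; the class and function names (Agent, decompose, find_peer_for) are invented for illustration.

```python
# Hypothetical sketch of the decompose-solve-delegate loop described above.
# Names and structures are illustrative, not taken from the thesis.
from typing import Callable, Dict, List, Optional


class Agent:
    def __init__(self, name: str, skills: Dict[str, Callable[[], str]]):
        self.name = name
        self.skills = skills              # sub-goal name -> local solver
        self.overlay: List["Agent"] = []  # semantic overlay neighbours

    def decompose(self, goal: str) -> List[str]:
        # Stand-in for the distributed Graphplan decomposition: here a goal
        # is just a "+"-separated list of sub-goals.
        return goal.split("+")

    def find_peer_for(self, sub_goal: str) -> Optional["Agent"]:
        # Stand-in for the peer-to-peer overlay search.
        for peer in self.overlay:
            if sub_goal in peer.skills:
                return peer
        return None

    def solve(self, goal: str) -> Dict[str, str]:
        results: Dict[str, str] = {}
        for sub_goal in self.decompose(goal):
            if sub_goal in self.skills:
                results[sub_goal] = self.skills[sub_goal]()
            else:
                peer = self.find_peer_for(sub_goal)
                if peer is None:
                    results[sub_goal] = "unsolved"
                else:
                    # Delegation; recording the dependency edge here is what
                    # would improve future overlay searches.
                    results.update(peer.solve(sub_goal))
        return results


if __name__ == "__main__":
    cook = Agent("cook", {"prepare_meal": lambda: "meal ready"})
    driver = Agent("driver", {"deliver": lambda: "delivered"})
    cook.overlay.append(driver)
    print(cook.solve("prepare_meal+deliver"))
```

    In the actual framework the decomposition is performed by the distributed Graphplan variants and the lookup by the peer-to-peer search over the semantic overlay network; both are reduced to trivial stand-ins here.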

    Impliance: A Next Generation Information Management Appliance

    "While the database industry has been remarkably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems?" In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. These trends imply three major requirements for Impliance: (1) to store, manage, and uniformly query all data, not just structured records; (2) to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data, unstructured as well as structured, in a uniform way; (c) achieving scale-out by exploiting simple, massively parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises. Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works and make commercial use of the work, but you must attribute the work to the author and CIDR 2007, 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA.

    An ontology for failure interpretation in automated planning and execution

    This is a post-peer-review, pre-copyedit version of an article published in ROBOT - Iberian Robotics Conference. The final authenticated version is available online at: http://dx.doi.org/10.1007/978-3-030-35990-4_31. Autonomous indoor robots are expected to accomplish tasks, such as serving a cup, that involve manipulation actions in which the task and motion planning levels are coupled. In both planning levels and in the execution phase, several sources of failure can occur. In this paper, an interpretation ontology covering several sources of failure in automated planning and during execution is introduced, with the purpose of making planning more informed and execution better prepared for recovery. The proposed failure interpretation ontological module covers: (1) geometric failures, which may appear when, e.g., the robot cannot reach to grasp/place an object, there is no collision-free path, or there is no feasible Inverse Kinematics (IK) solution; (2) hardware-related failures, which may appear when, e.g., the robot in a real environment requires re-calibration (gripper or arm) or is sent to a non-reachable configuration; and (3) software-agent-related failures, which may appear when, e.g., one of the robot's software components fails, for instance when an algorithm is unable to extract the proper features. The paper describes the concepts and the implementation of the failure interpretation ontology on top of foundational ontologies such as DUL and SUMO, and presents an example showing different situations in planning, demonstrating the range of information the framework can provide for autonomous robots.
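    Purely as an illustration of the taxonomy described above, and not the paper's OWL implementation, the sketch below mirrors the three failure categories as plain Python classes; the class and field names are hypothetical.

```python
# Illustrative sketch only: the paper's ontology is an OWL module aligned with
# foundational ontologies (e.g. DUL, SUMO); this mirrors its three failure
# categories as plain Python dataclasses for readability.
from dataclasses import dataclass


@dataclass
class Failure:
    description: str


@dataclass
class GeometricFailure(Failure):
    """E.g. object out of reach, no collision-free path, no feasible IK solution."""
    unreachable_target: str


@dataclass
class HardwareFailure(Failure):
    """E.g. gripper/arm needs re-calibration, non-reachable joint configuration."""
    component: str


@dataclass
class SoftwareFailure(Failure):
    """E.g. a perception algorithm fails to extract the required features."""
    module: str


failures = [
    GeometricFailure("cup outside workspace", unreachable_target="cup_1"),
    HardwareFailure("gripper drift detected", component="gripper"),
    SoftwareFailure("feature extraction returned no keypoints", module="perception"),
]
for f in failures:
    print(type(f).__name__, "-", f.description)
```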

    Robust query processing for linked data fragments

    Linked Data Fragments (LDFs) refer to interfaces that allow for publishing and querying Knowledge Graphs on the Web. These interfaces primarily differ in their expressivity and allow for exploring different trade-offs when balancing the workload between clients and servers in decentralized SPARQL query processing. To devise efficient query plans, clients typically rely on heuristics that leverage the metadata provided by the LDF interface, since obtaining fine-grained statistics from remote sources is a challenging task. However, these heuristics are prone to estimation errors based on this metadata, which can lead to inefficient query executions with a high number of requests, large amounts of transferred data and, consequently, excessive execution times. In this work, we investigate robust query processing techniques for Linked Data Fragment clients to address these challenges. We first focus on robust plan selection by proposing CROP, a query plan optimizer that explores the cost and robustness of alternative query plans. Then, we address robust query execution by proposing a new class of adaptive operators: Polymorphic Join Operators. These operators adapt their join strategy in response to possible cardinality estimation errors. The results of our first experimental study show that CROP outperforms state-of-the-art clients by exploring alternative plans based on their cost and robustness. In our second experimental study, we investigate how different planning approaches can benefit from polymorphic join operators and find that they enable more efficient query execution in the majority of cases.
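    The minimal sketch below illustrates the general idea behind an adaptive join of this kind, assuming a simple switch from a nested-loop strategy to a hash join once the observed input cardinality exceeds the optimizer's estimate; the threshold, interfaces and names are invented and do not correspond to the actual Polymorphic Join Operators.

```python
# Hypothetical adaptive ("polymorphic") join: start as a nested-loop join,
# which is cheap for small inputs, and fall back to a hash join once the
# observed left-side cardinality exceeds the estimate by some factor.
from typing import Dict, Iterable, Iterator, List, Tuple

Row = Dict[str, str]


def polymorphic_join(left: Iterable[Row], right: List[Row], key: str,
                     estimated_left: int, switch_factor: float = 2.0
                     ) -> Iterator[Tuple[Row, Row]]:
    left_iter = iter(left)
    buffered: List[Row] = []
    for lrow in left_iter:
        buffered.append(lrow)
        if len(buffered) > switch_factor * estimated_left:
            break  # the estimate was too low: switch strategy
    else:
        # Estimate held: plain nested-loop join over the buffered rows.
        for lrow in buffered:
            for rrow in right:
                if lrow[key] == rrow[key]:
                    yield lrow, rrow
        return

    # Fallback: build a hash table on the right side and probe it with the
    # buffered rows plus whatever is left of the input iterator.
    table: Dict[str, List[Row]] = {}
    for rrow in right:
        table.setdefault(rrow[key], []).append(rrow)
    for lrow in buffered + list(left_iter):
        for rrow in table.get(lrow[key], []):
            yield lrow, rrow


if __name__ == "__main__":
    L = [{"k": "a"}, {"k": "b"}, {"k": "a"}]
    R = [{"k": "a", "v": "1"}, {"k": "b", "v": "2"}]
    print(list(polymorphic_join(L, R, key="k", estimated_left=1)))
```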

    Semantics-aware planning methodology for automatic web service composition

    Service-Oriented Computing (SOC) has been a major research topic in recent years. It is based on the idea of composing distributed applications, even in heterogeneous environments, by discovering and invoking network-available Web Services to accomplish complex tasks when no existing service can satisfy the user request on its own. Service-Oriented Architecture (SOA) is a key design principle that facilitates building these autonomous, platform-independent Web Services. However, in distributed environments, using services without considering their underlying semantics, whether functional semantics or quality guarantees, can negatively affect the composition process by raising intermittent failures or leading to slow performance. More recently, Artificial Intelligence (AI) planning technologies have been exploited to facilitate automated composition, but most AI-planning-based algorithms do not scale well as the number of Web Services increases, and there is no guarantee that a solution to a composition problem will be found even if one exists. The AI Planning Graph addresses various limitations of traditional AI planning by providing a unique search space in a directed layered graph. However, the existing Planning Graph algorithm focuses only on finding complete solutions, without taking into account services that do not contribute to the goals; this can prevent the graph from being created at all when many services are available, even though most of them are irrelevant to the goals. This dissertation puts forward the concept of a more intelligent planning mechanism that combines semantics-aware service selection with a goal-directed planning algorithm. Based on this concept, a new planning system called Semantics Enhanced web service Mining (SEwsMining) has been developed. Semantics-aware service selection is achieved by calculating on-demand multi-attribute semantic similarity based on semantic annotations (QWSMO-Lite). The planning algorithm is a substantial revision of the AI Graphplan algorithm. To reduce the size of the planning graph, a bi-directional planning strategy has been developed.
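    As a hedged illustration of semantics-aware service selection (not the SEwsMining implementation), the sketch below ranks candidate services by a simple set-based similarity between their annotated inputs/outputs and the goal; the similarity measure, weights and service descriptions are hypothetical.

```python
# Illustrative only: candidate services are scored by how well their annotated
# outputs cover the goal concepts and how well the available concepts cover
# their inputs; low-scoring (irrelevant) services can be pruned before the
# planning graph is expanded.
from typing import Dict, Set


def jaccard(a: Set[str], b: Set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0


def score(service: Dict[str, Set[str]], available: Set[str], goal: Set[str],
          w_out: float = 0.7, w_in: float = 0.3) -> float:
    return (w_out * jaccard(service["outputs"], goal)
            + w_in * jaccard(service["inputs"], available))


services = {
    "GeoCoder": {"inputs": {"Address"}, "outputs": {"Coordinates"}},
    "WeatherByCoords": {"inputs": {"Coordinates"}, "outputs": {"Forecast"}},
    "StockQuote": {"inputs": {"Ticker"}, "outputs": {"Price"}},
}
available, goal = {"Address"}, {"Forecast"}
ranked = sorted(services, key=lambda s: score(services[s], available, goal),
                reverse=True)
print(ranked)  # the irrelevant StockQuote service ranks last
```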

    SMT-based Abstract Temporal Planning

    These are the proceedings of the International Workshop on Petri Nets and Software Engineering (PNSE’14) in Tunis, Tunisia, June 23-24, 2014, a co-located event of Petri Nets 2014, the 35th International Conference on Applications and Theory of Petri Nets and Concurrency, and ACSD 2014, the 14th International Conference on Application of Concurrency to System Design. Abstract planning is the first phase of web service composition in the PlanICS framework. A user query specifies the initial and the expected state of the requested plan. The paper extends PlanICS with a module for temporal planning by extending the user query with an LTL_k-X formula specifying temporal aspects of world transformations in a plan. Our solution comes together with an example, an implementation, and experimental results.
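    To make the "planning as satisfiability" idea concrete, the sketch below encodes a toy two-step planning problem as an SMT instance using the z3 Python bindings; it is a generic illustration under assumed names and does not reproduce the PlanICS encoding or its LTL_k-X constraints.

```python
# Toy bounded planning as SMT (requires the z3-solver package): Boolean state
# variables per step, action/frame constraints per transition, goal at step K.
from z3 import Bools, Solver, And, Or, Not, sat, is_true

K = 2  # plan length bound
# at_a[t] / at_b[t]: whether the robot is at location A / B at step t
at_a = Bools(" ".join(f"at_a_{t}" for t in range(K + 1)))
at_b = Bools(" ".join(f"at_b_{t}" for t in range(K + 1)))

s = Solver()
s.add(at_a[0], Not(at_b[0]))   # initial state: at A
s.add(at_b[K])                 # goal: at B within K steps
for t in range(K):
    # Each step either applies the "move" action (A -> B) or leaves the state
    # unchanged (frame axiom); the two locations stay mutually exclusive.
    move = And(at_a[t], Not(at_a[t + 1]), at_b[t + 1])
    stay = And(at_a[t + 1] == at_a[t], at_b[t + 1] == at_b[t])
    s.add(Or(move, stay), Not(And(at_a[t + 1], at_b[t + 1])))

if s.check() == sat:
    m = s.model()
    print(["A" if is_true(m.eval(at_a[t], model_completion=True)) else "B"
           for t in range(K + 1)])
```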

    AMaχoS—Abstract Machine for Xcerpt

    Web query languages promise convenient and efficient access to Web data such as XML, RDF, or Topic Maps. Xcerpt is one such Web query language with strong emphasis on novel high-level constructs for effective and convenient query authoring, particularly tailored to versatile access to data in different Web formats such as XML or RDF. However, so far it lacks an efficient implementation to complement the convenient language features. AMaχoS is an abstract machine implementation for Xcerpt that aims at efficiency and ease of deployment. It strictly separates compilation and execution of queries: queries are compiled once to abstract machine code that consists of (1) a code segment with instructions for evaluating each rule and (2) a hint segment that provides the abstract machine with optimization hints derived during query compilation. This article summarizes the motivation and principles behind AMaχoS and discusses how its current architecture realizes these principles.
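    The compile-once/execute-many separation can be pictured with the toy sketch below: a "compiled query" holds a code segment (one instruction list) and a hint segment consumed by a tiny evaluator. The instruction set, hint format and evaluator are invented for illustration and are not the actual AMaχoS machine code.

```python
# Toy sketch of the compile/execute separation described above; everything
# here (opcodes, hints, evaluator) is invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class CompiledQuery:
    code: List[Tuple[str, str]]                           # code segment
    hints: Dict[str, str] = field(default_factory=dict)   # hint segment


def compile_query(rule: str) -> CompiledQuery:
    # "Compilation" turns a path-like rule into MATCH instructions and records
    # a hint about where evaluation should start.
    labels = rule.split("/")
    code = [("MATCH", label) for label in labels] + [("EMIT", labels[-1])]
    return CompiledQuery(code=code, hints={"start_with": labels[0]})


def execute(query: CompiledQuery, data: Dict[str, dict]) -> List[str]:
    # The evaluator walks nested dicts following MATCH instructions; a real
    # machine would use the hint segment to choose indexes or evaluation order.
    current: object = data
    out: List[str] = []
    for op, arg in query.code:
        if op == "MATCH" and isinstance(current, dict):
            current = current.get(arg, {})
        elif op == "EMIT":
            out.append(str(current))
    return out


q = compile_query("book/title")
print(q.hints, execute(q, {"book": {"title": "sample title"}}))
```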

    Incorporating an Element of Negotiation into a Service-Oriented Broker Application

    The Software as a Service (SaaS) model is a service-based model in which a desired service is assembled, delivered and consumed on demand. The IBHIS broker is a ‘proof of concept’ demonstration of SaaS based on services that deliver data. IBHIS has addressed a number of challenges for several aspects of service-based software, especially the concept of a ‘broker service’ and a form of service negotiation that is used only in establishing end-user access authorizations. This thesis investigates and develops an extended form of service-based broker, called CAPTAIN (Care Planning Through Auction-based Information Negotiation). It extends the concepts and role of the broker as used in IBHIS and, in particular, extends the service negotiation function in order to demonstrate a full range of service characteristics. CAPTAIN uses the idea of the integrated care plan from healthcare as a case study. A care planner acting on behalf of a patient uses the broker to negotiate with providers to produce the integrated care plan for the patient, with the broker and the providers agreeing on the terms and conditions relating to the supply of the services. We have developed a ‘proof of concept’ service-oriented broker architecture for CAPTAIN that includes planning, negotiation and service-based software models to provide a flexible care planning system. The CAPTAIN application has been evaluated with a focus on three features: functions, data access and negotiation. The CAPTAIN broker performs as planned, producing the integrated care plan. The providers’ data sources are accessed to read and write data records during and after service negotiation. The negotiation model permits the broker to interact with the providers to produce an adaptable plan based on the client’s needs. The primary outcome is an extensible service-oriented broker architecture that can enable more scalable and flexible distributed information management by adding interaction with the data sources.
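    As an illustrative sketch of one auction-style negotiation round between broker and providers (not the CAPTAIN implementation), the fragment below has the broker issue a call for proposals for a care-plan item, collect bids, and select one against a client constraint; provider names, bid attributes and the selection rule are hypothetical.

```python
# Hypothetical single negotiation round, loosely following the auction-based
# idea described above; all names and criteria are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class Bid:
    provider: str
    price: float
    start_in_days: int


def call_for_proposals(item: str,
                       providers: Dict[str, Callable[[str], Optional[Bid]]]
                       ) -> List[Bid]:
    # Each provider may decline (return None) or return a bid for the item.
    return [bid for p in providers.values() if (bid := p(item)) is not None]


def select(bids: List[Bid], max_wait_days: int) -> Optional[Bid]:
    # Broker-side criterion: cheapest bid that meets the client's deadline.
    feasible = [b for b in bids if b.start_in_days <= max_wait_days]
    return min(feasible, key=lambda b: b.price) if feasible else None


providers = {
    "clinic_a": lambda item: Bid("clinic_a", 120.0, 3) if item == "physiotherapy" else None,
    "clinic_b": lambda item: Bid("clinic_b", 95.0, 10),
}
bids = call_for_proposals("physiotherapy", providers)
print(select(bids, max_wait_days=7))  # clinic_a wins despite the higher price
```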