
    On the cost-complexity of multi-context systems

    Multi-context systems provide a powerful framework for modelling information-aggregation systems featuring heterogeneous reasoning components. Their execution can, however, incur non-negligible cost. Here, we focus on the cost-complexity of such systems. To that end, we introduce cost-aware multi-context systems, an extension of the non-monotonic multi-context systems framework that takes into account the costs incurred by executing the semantic operators of the individual contexts. We formulate the notion of cost-complexity for consistency and reasoning problems in MCSs. Subsequently, we provide a series of results for progressively more constrained classes of MCSs, and finally introduce an incremental cost-reducing algorithm solving the reasoning problem for definite MCSs.
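    Since definite MCSs behave monotonically (much like definite logic programs), reasoning can be approached with a least-fixed-point computation over the contexts' operators, which is a natural place to account for cost. The sketch below is a minimal, hypothetical illustration of such an incremental computation; the names (Context, BridgeRule, solve_definite) and the flat per-application cost model are assumptions for illustration, not the paper's actual formalization.

```python
# Minimal sketch of incremental reasoning in a definite, cost-aware MCS.
# All names and the cost model are hypothetical.
from dataclasses import dataclass, field

@dataclass
class BridgeRule:
    head: str                                      # belief added when the body holds
    body: set[str] = field(default_factory=set)    # beliefs required across contexts

@dataclass
class Context:
    rules: list[BridgeRule]
    op_cost: float                                 # cost of one semantic-operator application

def solve_definite(contexts: list[Context]) -> tuple[set[str], float]:
    """Least-fixed-point computation over all contexts, summing operator costs."""
    beliefs: set[str] = set()
    total_cost = 0.0
    changed = True
    while changed:
        changed = False
        for ctx in contexts:
            total_cost += ctx.op_cost              # charge each operator application
            for rule in ctx.rules:
                if rule.body <= beliefs and rule.head not in beliefs:
                    beliefs.add(rule.head)
                    changed = True
    return beliefs, total_cost
```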

    Fuzzy argumentation for trust

    In an open Multi-Agent System, the goals of agents acting on behalf of their owners often conflict with each other. Therefore, a personal agent protecting the interests of a single user cannot always rely on other agents. Consequently, such a personal agent needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind such trusting decisions. Our solution features a separation of opponent modeling and decision making. It uses possibilistic logic to model the behavior of opponents, and we propose an extension of the argumentation framework by Amgoud and Prade that uses the fuzzy rules within these models for well-supported decisions.
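    In possibilistic logic, a standard way to rank arguments is by the weakest link: an argument is only as certain as its least certain premise. Below is a minimal sketch of a decision rule built on that idea; the functions and the simple pro/con comparison are illustrative assumptions, not the authors' actual extension of the Amgoud-Prade framework.

```python
# Weakest-link argument strength over possibilistic certainty degrees.
# Hypothetical structures, for illustration only.

def argument_strength(certainties: list[float]) -> float:
    """An argument is only as certain as its least certain premise."""
    return min(certainties)

def decide_to_trust(pro: list[list[float]], con: list[list[float]]) -> bool:
    """Trust iff the strongest supporting argument outweighs the strongest attack."""
    best_pro = max((argument_strength(a) for a in pro), default=0.0)
    best_con = max((argument_strength(a) for a in con), default=0.0)
    return best_pro > best_con

# Two arguments for trusting an agent, one against:
print(decide_to_trust(pro=[[0.9, 0.7], [0.6]], con=[[0.8, 0.5]]))  # True: 0.7 > 0.5
```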

    Models and Methods for Plan Diagnosis

    We consider a model-based diagnosis approach to the diagnosis of plans. Here, a plan executed by some agent(s) is considered as a system to be diagnosed. We introduce a simple formal model of plans and plan execution where it is assumed that the execution of a plan can be monitored by making partial observations of plan states. These observations are compared with the states predicted under (normal) plan execution. Deviations between observed and predicted states can be explained by qualifying some plan steps in the plan as behaving abnormally. A diagnosis is a subset of plan steps qualified as abnormal that can be used to restore the compatibility between the predicted and the observed partial state. In contrast to model-based diagnosis, where minimum and minimal diagnoses are preferred, we argue that in plan-based diagnosis maximum informative diagnoses should be preferred. These are diagnoses that make the strongest predictions with respect to partial states to be observed in the future. We show that, in contrast to minimum diagnoses, finding a (minimal) maximum informative diagnosis can be achieved in polynomial time. Finally, we show how to deal with the diagnosis of a plan when an arbitrary sequence of partial observations is given.
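    The core comparison step can be made concrete with a toy model. The sketch below qualifies plan steps as abnormal when their predicted effects contradict an observed partial state; the one-variable-per-step effect model and all names are simplifying assumptions, much coarser than the paper's formal model.

```python
# Toy plan diagnosis: mark steps whose predicted effects disagree with
# an observed partial state. Hypothetical, simplified model.

def diagnose(step_effects: dict[str, dict[str, object]],
             observed: dict[str, object]) -> set[str]:
    """Return the steps whose predicted effect conflicts with the observation.
    Steps whose effects are unobserved stay qualified as normal, which keeps
    the diagnosis informative about partial states observed in the future."""
    abnormal = set()
    for step, effects in step_effects.items():
        for var, predicted in effects.items():
            if var in observed and observed[var] != predicted:
                abnormal.add(step)
    return abnormal

effects = {"s1": {"door": "open"}, "s2": {"lamp": "on"}}
print(diagnose(effects, {"door": "closed"}))  # {'s1'}: prediction contradicted
```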

    08461 Abstracts Collection -- Planning in Multiagent Systems

    From the 9th to the 14th of November 2008, the Dagstuhl Seminar 08461 "Planning in Multiagent Systems" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Exploiting linkage information in real-valued optimization with the real-valued gene-pool optimal mixing evolutionary algorithm

    The recently introduced Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) has been shown to be among the state of the art for solving discrete optimization problems. Key to the success of GOMEA is its ability to efficiently exploit the linkage structure of a problem. Here, we introduce the Real-Valued GOMEA (RV-GOMEA), which incorporates several aspects of the real-valued EDA known as AMaLGaM into GOMEA in order to make GOMEA well suited for real-valued optimization. The key strength of GOMEA, competently exploiting linkage structure, is effectively preserved in RV-GOMEA, enabling excellent performance on problems that exhibit a linkage structure that is to some degree decomposable. Moreover, the main variation operator of GOMEA enables substantial improvements in performance if the problem allows for partial evaluations, which may very well be possible in many real-world applications. Comparisons with state-of-the-art algorithms such as CMA-ES and AMaLGaM on a set of well-known benchmark problems show that RV-GOMEA achieves comparable, excellent scalability in the case of black-box optimization. Moreover, RV-GOMEA achieves unprecedented scalability on problems that allow for partial evaluations, reaching near-optimal solutions for problems with up to millions of real-valued variables within one hour on a normal desktop computer.
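    The idea behind partial evaluations is that when variation changes only a few variables of an additively decomposable objective, only the sub-functions touching those variables need re-evaluation. The sketch below illustrates this on a hypothetical chain-structured objective f(x) = sum_i (x_i - x_{i+1})^2; the function names and problem are illustrative assumptions, not RV-GOMEA's implementation.

```python
import numpy as np

def sub_f(x: np.ndarray, i: int) -> float:
    """One sub-function of the chain objective, over variables i and i+1."""
    return (x[i] - x[i + 1]) ** 2

def full_eval(x: np.ndarray) -> float:
    return sum(sub_f(x, i) for i in range(len(x) - 1))

def partial_eval(f_old: float, x: np.ndarray, x_new: np.ndarray, j: int) -> float:
    """Update f after changing only variable j: re-evaluate just the
    sub-functions that depend on x[j] (here, at most two of them)."""
    affected = [i for i in (j - 1, j) if 0 <= i < len(x) - 1]
    delta = sum(sub_f(x_new, i) - sub_f(x, i) for i in affected)
    return f_old + delta

x = np.array([1.0, 2.0, 3.0])
f = full_eval(x)
x2 = x.copy(); x2[1] = 5.0
assert abs(partial_eval(f, x, x2, 1) - full_eval(x2)) < 1e-12  # O(1) vs O(n) work
```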

    Scalable genetic programming by gene-pool optimal mixing and input-space entropy-based building-block learning

    The Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) is a recently introduced model-based EA that has been shown to be capable of outperforming state-of-the-art alternative EAs in terms of scalability when solving discrete optimization problems. One of the key aspects of GOMEA's success is a variation operator designed to extensively exploit linkage models by effectively combining partial solutions. Here, we bring the strengths of GOMEA to Genetic Programming (GP), introducing GP-GOMEA. Under the hypothesis of having little problem-specific knowledge, and in an effort to design easy-to-use EAs, GP-GOMEA requires no parameter specification. On a set of well-known benchmark problems, we find that GP-GOMEA outperforms standard GP while being on par with more recently introduced, state-of-the-art EAs. We furthermore introduce Input-space Entropy-based Building-block Learning (IEBL), a novel approach to identifying and encapsulating relevant building blocks (subroutines) into new terminals and functions. On problems with an inherent degree of modularity, IEBL can contribute to compact solution representations, providing a large potential for knock-on effects in performance. On the difficult but highly modular Even Parity problem, GP-GOMEA+IEBL obtains excellent scalability, solving the 14-bit instance in less than one hour.
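    Why Even Parity rewards building-block encapsulation can be seen directly: the n-bit case reduces to repeated 2-bit XOR, which is exactly the kind of reusable subroutine that an approach like IEBL aims to turn into a new primitive. The snippet below only illustrates this modularity; it is not the IEBL procedure itself.

```python
# Even Parity decomposes into a fold of a single 2-bit building block (XOR).
from functools import reduce

def xor(a: int, b: int) -> int:   # candidate building block
    return a ^ b

def even_parity(bits: list[int]) -> int:
    """1 iff the number of set bits is even; folds the xor building block."""
    return 1 - reduce(xor, bits, 0)

print(even_parity([1, 0, 1, 1]))  # 0 (three ones: odd)
print(even_parity([1, 1, 0, 0]))  # 1 (two ones: even)
```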

    Large-scale parallelization of partial evaluations in evolutionary algorithms for real-world problems

    The importance and potential of Gray-Box Optimization (GBO) with evolutionary algorithms is becoming increasingly clear, both for benchmark and real-world problems. We consider the GBO setting where partial evaluations are possible, meaning that sub-functions of the evaluation function are known and can be exploited to improve optimization efficiency. In this paper, we show that the efficiency of GBO can be greatly improved through large-scale parallelism, exploiting the fact that each evaluation of the objective function requires the calculation of a number of independent sub-functions. This is especially interesting for real-world problems, where the majority of the computational effort is often spent on the evaluation function. Moreover, we show how the best parallelization technique largely depends on factors such as the number of sub-functions and their required computation time, revealing that the best parallelization technique should be selected accordingly for different parts of the optimization. As an illustration, we show how large-scale parallelization can be applied to the optimization of high-dose-rate brachytherapy treatment plans for prostate cancer. We find that using a modern Graphics Processing Unit (GPU) is the most efficient parallelization technique in all realistic scenarios.
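    Because the sub-functions of one evaluation are mutually independent, they can be dispatched to workers in parallel. The sketch below uses a CPU process pool for simplicity, whereas the paper found a GPU most efficient in realistic scenarios; the sub-function here is a cheap stand-in, and all names are illustrative assumptions.

```python
# Parallelizing the independent sub-functions of one evaluation.
from concurrent.futures import ProcessPoolExecutor
import math

def sub_function(args: tuple[int, float]) -> float:
    i, xi = args
    # Stand-in for an expensive physics/dose sub-computation.
    return math.sin(xi) ** 2 + 0.01 * i

def evaluate(x: list[float]) -> float:
    """Sum the sub-function values, computed in parallel across processes."""
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(sub_function, enumerate(x), chunksize=64))

if __name__ == "__main__":
    print(evaluate([0.1 * i for i in range(1000)]))
```

    Which backend wins depends on exactly the factors the paper names: with many cheap sub-functions, per-task dispatch overhead dominates a process pool, while a GPU kernel amortizes it.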

    On the feasibility of automatically selecting similar patients in highly individualized radiotherapy dose reconstruction for historic data of pediatric cancer survivors

    Purpose: The aim of this study is to establish the first step toward a novel and highly individualized three-dimensional (3D) dose distribution reconstruction method, based on CT scans and organ delineations of recently treated patients. Specifically, we assess the feasibility of automatically selecting the CT scan of a recently treated childhood cancer patient who is similar to a given historically treated child who suffered from Wilms' tumor.

    Methods: A cohort of 37 recently treated children between 2 and 6 yr old is considered. Five potential notions of ground-truth similarity are proposed, each focusing on different anatomical aspects. These notions are automatically computed from CT scans of the abdomen and 3D organ delineations (liver, spleen, spinal cord, external body contour). The first is based on deformable image registration, the second on the Dice similarity coefficient, the third on the Hausdorff distance, the fourth on pairwise organ distances, and the last is computed by means of the overlap volume histogram. The relationship between typically available features of historically treated patients and the proposed ground-truth notions of similarity is studied by adopting state-of-the-art machine learning techniques, including random forests. The feasibility of automatically selecting the most similar patient is then assessed by comparing ground-truth rankings of similarity with predicted rankings.

    Results: Similarities based (mainly) on the external abdomen shape and on the pairwise organ distances are highly correlated (Pearson r_p ≥ 0.70) and are successfully modeled with random forests based on historically recorded features (pseudo-R^2 ≥ 0.69). In contrast, similarities based on the shape of internal organs cannot be modeled. For the similarities that random forests can reliably model, an estimation of feature relevance indicates that abdominal diameters and weight are the most important. Experiments on automatically selecting similar patients lead to coarse yet quite robust results: the most similar patient is retrieved only 22% of the time; however, the error in worst-case scenarios is limited, with, at worst, the fourth most similar patient being retrieved.

    Conclusions: The results demonstrate that automatically selecting similar patients is feasible when focusing on the shape of the external abdomen and on the position of internal organs. Moreover, whereas the common practice in phantom-based dose reconstruction is to select a representative phantom using age, height, and weight as discriminant factors for any treatment scenario, our analysis of abdominal tumor treatment for children shows that the most relevant features are weight and the anterior-posterior and left-right abdominal diameters.
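    Of the five similarity notions, the Dice similarity coefficient is the simplest to state: 2|A ∩ B| / (|A| + |B|) for two binary organ masks A and B. A minimal sketch on toy 2D masks follows; the array shapes and example masks are illustrative, not the study's data.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B|/(|A|+|B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Two overlapping 2x2 squares on a 4x4 grid (real masks would be 3D voxel grids).
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 2:4] = True   # 4 voxels, 2 shared
print(dice(a, b))  # 0.5
```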