
    Managing distributed situation awareness in a team of agents

    The research presented in this thesis investigates the best ways to manage Distributed Situation Awareness (DSA) for a team of agents tasked with conducting search activity under limited resources (battery life, memory use, computational power, etc.). In the first part of the thesis, an algorithm to coordinate agents (e.g., UAVs) is developed. It is based on Delaunay triangulation and aims to support efficient, adaptable, scalable, and predictable search. Results from simulation and from physical experiments with UAVs show good performance of the developed method in terms of resource utilisation, adaptability, scalability, and predictability in comparison with existing fixed-pattern, pseudorandom, and hybrid methods. The second part of the thesis employs Bayesian Belief Networks (BBNs) to define and manage DSA based on the information obtained from the agents' search activity. Algorithms and methods were developed to describe how agents update the BBN to model the system's DSA, predict plausible future states of the agents' search area, handle uncertainties, manage agents' beliefs (accounting for sensor differences), monitor agents' interactions, and maintain an adaptable BBN for DSA management using structural learning. The evaluation uses environment situation information obtained from the agents' sensors during search activity, and the results show superior performance over well-known alternative methods in terms of situation prediction accuracy, uncertainty handling, and adaptability. The thesis's main contributions are therefore (i) a simple search planning algorithm that combines the strengths of fixed-pattern and pseudorandom methods while offering good resource utilisation, scalability, adaptability, and predictability; (ii) a formal model of DSA using a BBN that can be updated and learnt during the mission; and (iii) an investigation of the relationship between agents' search coordination and DSA management.
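    The abstract gives no implementation detail, but a minimal sketch of the general idea, assuming scipy is available and with invented waypoints, UAV positions, and a nearest-centroid assignment rule, might partition a search area via Delaunay triangulation like this:

```python
# Illustrative sketch only, not the thesis's coordination algorithm: triangulate
# sampled waypoints and hand each triangle to the nearest UAV. All positions and
# the assignment rule are assumptions made for this example.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
waypoints = rng.uniform(0, 100, size=(40, 2))          # sampled points in a 100 x 100 area
uav_positions = np.array([[10.0, 10.0], [90.0, 20.0], [50.0, 90.0]])

tri = Delaunay(waypoints)
centroids = waypoints[tri.simplices].mean(axis=1)      # one centroid per triangle

# Assign each triangle to the closest UAV; each UAV then searches its own triangles.
dists = np.linalg.norm(centroids[:, None, :] - uav_positions[None, :, :], axis=2)
assignment = dists.argmin(axis=1)

for k in range(len(uav_positions)):
    print(f"UAV {k} covers {np.sum(assignment == k)} triangles")
```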

    On the enhancement of Big Data Pipelines through Data Preparation, Data Quality, and the distribution of Optimisation Problems

    Nowadays, data are fundamental for companies, providing operational support by facilitating daily transactions. Data have also become the cornerstone of strategic decision-making processes in businesses. For this purpose, there are numerous techniques that allow knowledge and value to be extracted from data. For example, optimisation algorithms excel at supporting decision-making processes to improve the use of resources, time and costs in the organisation. In the current industrial context, organisations usually rely on business processes to orchestrate their daily activities while collecting large amounts of information from heterogeneous sources. Therefore, the support of Big Data technologies (which are based on distributed environments) is required given the volume, variety and velocity of the data. To extract value from the data, a set of techniques or activities is then applied in an orderly way and at different stages. This set of techniques or activities, which facilitates the acquisition, preparation, and analysis of data, is known in the literature as the Big Data pipeline. In this thesis, the improvement of three stages of Big Data pipelines is tackled: Data Preparation, Data Quality assessment, and Data Analysis. These improvements can be addressed from an individual perspective, by focussing on each stage, or from a more complex and global perspective, implying the coordination of these stages to create data workflows. The first stage to be improved is Data Preparation, by supporting the preparation of data with complex structures (i.e., data with several levels of nesting, such as arrays). Shortcomings have been found in the literature and in current technologies for transforming complex data in a simple way. Therefore, this thesis aims to improve the Data Preparation stage through Domain-Specific Languages (DSLs). Specifically, two DSLs are proposed for different use cases: one is a general-purpose data transformation language, while the other is aimed at extracting event logs in a standard format for process mining algorithms. The second area for improvement is the assessment of Data Quality. Depending on the type of Data Analysis algorithm, poor-quality data can seriously skew the results; optimisation algorithms are a clear example, since if the data are not sufficiently accurate and complete, the search space can be severely affected. Therefore, this thesis formulates a methodology for modelling Data Quality rules adjusted to the context of use, together with a tool that facilitates the automation of their assessment. This makes it possible to discard data that do not meet the quality criteria defined by the organisation. In addition, the proposal includes a framework that helps to select actions to improve the usability of the data. The third and last proposal involves the Data Analysis stage. Here, the thesis faces the challenge of supporting the use of optimisation problems in Big Data pipelines. There is a lack of methodological solutions that allow exhaustive optimisation problems (i.e., those that guarantee finding an optimal solution by exploring the whole search space) to be computed in distributed environments. Solving this type of problem in the Big Data context is computationally complex, and the problems can be NP-complete; two factors contribute to this difficulty. On the one hand, the search space can increase significantly as the amount of data to be processed by the optimisation algorithms grows; this challenge is addressed through a technique to generate and group problems with distributed data. On the other hand, processing optimisation problems with complex models and large search spaces in distributed environments is not trivial, so a proposal is presented for a particular case of this type of scenario. As a result, this thesis develops methodologies that have been published in scientific journals and conferences. The methodologies have been implemented in software tools that are integrated with the Apache Spark data processing engine, and the solutions have been validated through tests and use cases with real datasets.
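    As a rough illustration of the Data Preparation and Data Quality stages described above, the following sketch (assuming PySpark, with invented column names and a single completeness rule; it does not reproduce the thesis's DSLs or tooling) unnests a complex array column and filters rows that fail a quality rule:

```python
# Illustrative sketch only: flatten a nested array column (Data Preparation) and apply
# a context-specific completeness rule (Data Quality) with PySpark. The schema, data,
# and rule are assumptions made for this example.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

orders = spark.createDataFrame(
    [("o1", [("a", 2), ("b", None)]), ("o2", [("c", 1)])],
    "order_id string, lines array<struct<sku:string, qty:int>>",
)

# Data Preparation: unnest the array so that each order line becomes its own row.
flat = (orders
        .select("order_id", F.explode("lines").alias("line"))
        .select("order_id", "line.sku", "line.qty"))

# Data Quality: discard rows that violate a simple completeness rule (missing quantity).
clean = flat.filter(F.col("qty").isNotNull())
clean.show()
```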

    JFPC 2019 - Actes des 15es Journées Francophones de Programmation par Contraintes

    The JFPC (Journées Francophones de Programmation par Contraintes) are the main conference of the French-speaking community working on constraint satisfaction problems (CSP), propositional satisfiability (SAT), and/or constraint logic programming (CLP). The constraint programming community also maintains links with operations research (OR), interval analysis, and various areas of artificial intelligence. The efficiency of solving methods and the extension of the models allow constraint programming to tackle numerous and varied applications such as logistics, task scheduling, timetabling, robotics design, genome study in bioinformatics, optimisation of agricultural practices, etc. The JFPC are intended as a friendly venue for meetings, discussions, and exchanges within the French-speaking community, in particular between doctoral students, established researchers, and industry. The importance of the JFPC is reflected in the considerable share (about a third) of the French-speaking community in worldwide research in this field. Sponsored by the AFPC (Association Française pour la Programmation par Contraintes), JFPC 2019 took place from 12 to 14 June 2019 at IMT Mines Albi and was organised by Xavier Lorca (chair of the scientific committee) and Élise Vareilles (chair of the organising committee).

    A review of literature on parallel constraint solving

    As multicore computing is now standard, it seems irresponsible for constraints researchers to ignore its implications. Researchers need to address a number of issues to exploit parallelism, such as: investigating which constraint algorithms are amenable to parallelisation; whether to use shared memory or distributed computation; whether to use static or dynamic decomposition; and how best to exploit portfolios and cooperating search. We review the literature, and see that we can sometimes do quite well, some of the time, on some instances, but we are far from a general solution. Yet there seems to be little overall guidance that can be given on how best to exploit multicore computers to speed up constraint solving. We hope at least that this survey will provide useful pointers to future researchers wishing to correct this situation.
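    As a concrete illustration of the portfolio idea mentioned above (a sketch under assumed details: the toy N-queens instance, the naive backtracking solver, and the value orderings are all invented, and no solver library is used), several differently configured copies of one solver can race on the same instance, with the first answer winning:

```python
# Illustrative portfolio sketch: run the same naive solver with different value
# orderings in parallel processes and take whichever finishes first.
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

def solve(order):
    """Naive backtracking for 8-queens, trying column values in the given order."""
    n, cols = 8, []
    def ok(c):
        r = len(cols)
        return all(c != cc and abs(c - cc) != r - rr for rr, cc in enumerate(cols))
    def bt():
        if len(cols) == n:
            return list(cols)
        for c in order:
            if ok(c):
                cols.append(c)
                if (s := bt()) is not None:
                    return s
                cols.pop()
        return None
    return bt()

if __name__ == "__main__":
    # Portfolio: different orderings explore the search tree in different ways.
    orders = [list(range(8)), list(range(7, -1, -1)), [3, 4, 2, 5, 1, 6, 0, 7]]
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(solve, o) for o in orders]
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()  # only cancels configurations that have not started yet
        print("first solution found:", next(iter(done)).result())
```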

    A distributed optimization method for the geographically distributed data centres problem

    The geographically distributed data centres problem (GDDC) is a naturally distributed resource allocation problem. The problem involves allocating a set of virtual machines (VMs) to data centres (DCs) in each time period of an operating horizon. The goal is to optimize the allocation of workload across a set of DCs such that the energy cost is minimized, while respecting limitations on data centre capacities, migrations of VMs, etc. In this paper, we propose a distributed optimization method for the GDDC using the distributed constraint optimization (DCOP) framework. First, we develop a new model of the GDDC as a DCOP in which each DC operator is represented by an agent. Second, since traditional DCOP approaches are unsuited to this type of large-scale problem with multiple variables per agent and global constraints, we introduce a novel semi-asynchronous distributed algorithm for solving such DCOPs. Preliminary results illustrate the benefits of the new method.
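    For intuition only, a toy centralised baseline for the allocation problem might look as follows (costs, capacities, and the greedy rule are invented; the paper's contribution is instead a DCOP model and a semi-asynchronous distributed algorithm, which this sketch does not reproduce):

```python
# Illustrative sketch: greedily assign VM workloads to data centres so as to keep
# energy cost low while respecting capacity limits. All data are made up.
vm_load = {"vm1": 4, "vm2": 2, "vm3": 6, "vm4": 3}        # CPU units per VM
dc_capacity = {"dublin": 8, "oregon": 10}                  # CPU units per DC
energy_cost = {"dublin": 0.12, "oregon": 0.09}             # cost per CPU unit

used = {dc: 0 for dc in dc_capacity}
assignment = {}
for vm, load in sorted(vm_load.items(), key=lambda kv: -kv[1]):
    # Greedy rule: cheapest DC that still has room for this VM.
    feasible = [dc for dc in dc_capacity if used[dc] + load <= dc_capacity[dc]]
    dc = min(feasible, key=energy_cost.get)
    assignment[vm] = dc
    used[dc] += load

total = sum(energy_cost[dc] * vm_load[vm] for vm, dc in assignment.items())
print(assignment, f"total energy cost = {total:.2f}")
```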

    Multi-Variable Agents Decomposition for DCOPs to Exploit Multi-Level Parallelism

    Current DCOP algorithms suffer from a major limiting assumption, namely that each agent can handle only a single variable of the problem, which limits their scalability. This paper proposes a novel Multi-Variable Agent (MVA) DCOP decomposition, which: (i) exploits the co-locality of an agent's variables, allowing us to adopt efficient centralized techniques; (ii) enables the use of hierarchical parallel models, such as those based on GPGPUs; and (iii) empirically reduces the amount of communication required in several classes of DCOP algorithms. Experimental results show that our MVA decomposition outperforms non-decomposed DCOP algorithms in terms of network load and scalability.
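    To convey the flavour of the decomposition (a sketch with invented variables, domains, and cost functions; it is not the paper's algorithm), an agent owning several co-located variables can resolve them with a centralized technique before exchanging results with neighbouring agents:

```python
# Illustrative MVA-style sketch: an agent solves its own co-located variables with a
# centralized technique (brute force here) given fixed values on its boundary with
# other agents, rather than treating every variable as a separate "virtual agent".
from itertools import product

DOMAIN = [0, 1, 2]

def local_best(own_vars, boundary, local_cost):
    """Centralized solve of one agent's variables for fixed boundary values."""
    best_assign, best_cost = None, float("inf")
    for values in product(DOMAIN, repeat=len(own_vars)):
        assign = dict(zip(own_vars, values))
        cost = local_cost(assign, boundary)
        if cost < best_cost:
            best_assign, best_cost = assign, cost
    return best_assign, best_cost

# Agent A owns x1 and x2 (co-located) and shares a constraint with value y from agent B.
def cost_a(assign, boundary):
    x1, x2, y = assign["x1"], assign["x2"], boundary["y"]
    return abs(x1 - x2) + (x1 + y) % 3      # two toy constraints

assign, cost = local_best(["x1", "x2"], {"y": 2}, cost_a)
print("agent A's local assignment:", assign, "cost:", cost)
```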