Anytime coalition structure generation on synergy graphs
We consider the coalition structure generation (CSG) problem on synergy graphs, which arises in many practical applications where communication constraints and social or trust relationships must be taken into account when forming coalitions. We propose a novel representation of this problem based on the concept of edge contraction, and an innovative branch and bound approach (CFSS), which is particularly efficient when applied to a general class of characteristic functions. This new model provides a non-redundant partition of the search space, hence allowing effective parallelisation. We evaluate CFSS on two benchmark functions, the edge sum with coordination cost and the collective energy purchasing functions, comparing its performance with the best algorithm for CSG on synergy graphs: DyCE. The latter approach is centralised and cannot be efficiently parallelised due to its exponential memory requirements in the number of agents, which limits its scalability (while CFSS's memory requirements are only polynomial). Our results show that, when the graphs are very sparse, CFSS is four orders of magnitude faster than DyCE. Moreover, CFSS is the first approach to provide anytime approximate solutions with quality guarantees for very large systems (i.e., with more than 2700 agents).
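To make the edge-contraction idea concrete, here is a minimal sketch of the non-redundant enumeration it induces: branching on each edge between "contract" (merge the two coalitions) and "mark red" (never contract, keeping them apart) visits every feasible coalition structure exactly once. All names are illustrative; this is not the authors' CFSS implementation, and the bound-and-prune step that makes CFSS anytime is omitted.

```python
# Minimal sketch of coalition structure enumeration via edge contraction
# (illustrative; the branch-and-bound pruning of CFSS is omitted).
# Nodes are coalitions (frozensets of agents); contracting an edge merges
# the two coalitions at its endpoints.

def contract(graph, red, a, b):
    """Merge coalitions a and b; remap red ('never contract') edges."""
    merged = a | b
    new_graph = {merged: (graph[a] | graph[b]) - {a, b}}
    for node, neighbours in graph.items():
        if node not in (a, b):
            new_graph[node] = {merged if n in (a, b) else n for n in neighbours}
    new_red = set()
    for pair in red:
        u, v = tuple(pair)
        u = merged if u in (a, b) else u
        v = merged if v in (a, b) else v
        if u != v:
            new_red.add(frozenset({u, v}))
    return new_graph, new_red

def enumerate_structures(graph, red=frozenset()):
    """Yield every coalition structure feasible on the synergy graph once."""
    edge = next((frozenset({a, b}) for a in graph for b in graph[a]
                 if frozenset({a, b}) not in red), None)
    if edge is None:
        yield set(graph)                  # remaining nodes = final coalitions
        return
    a, b = tuple(edge)
    # Branch 1: mark the edge red -- a and b end up in different coalitions.
    yield from enumerate_structures(graph, frozenset(red) | {edge})
    # Branch 2: contract the edge -- a and b end up in the same coalition.
    yield from enumerate_structures(*contract(graph, red, a, b))

if __name__ == "__main__":
    # Triangle synergy graph over three agents: yields all 5 partitions.
    n1, n2, n3 = frozenset({1}), frozenset({2}), frozenset({3})
    g = {n1: {n2, n3}, n2: {n1, n3}, n3: {n1, n2}}
    for structure in enumerate_structures(g):
        print(sorted(sorted(c) for c in structure))
```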
Decentralized dynamic task allocation for UAVs with limited communication range
We present the Limited-range Online Routing Problem (LORP), which involves a team of Unmanned Aerial Vehicles (UAVs) with limited communication range that must autonomously coordinate to service task requests. We first show a general approach to cast this dynamic problem as a sequence of decentralized task allocation problems. Then we present two solutions, both based on modeling the allocation task as a Markov Random Field and subsequently assessing decisions by means of the decentralized Max-Sum algorithm. Our first solution assumes independence between requests, whereas our second solution also considers the UAVs' workloads. A thorough empirical evaluation shows that our workload-based solution consistently outperforms current state-of-the-art methods in a wide range of scenarios, lowering the average service time by up to 16%. In the best-case scenario there is no gap between our decentralized solution and centralized techniques; in the worst-case scenario we reduce the gap between current decentralized and centralized techniques by 25%. Thus, our solution becomes the method of choice for our problem.
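As a toy illustration of the per-round allocation problem, the sketch below brute-forces a small instance of the objective the paper optimises decentrally with Max-Sum on a Markov Random Field. The utility (negative travel distance plus a workload penalty) and the function names are assumptions for illustration, not the paper's exact model.

```python
# Toy, centralised stand-in for one round of LORP task allocation
# (illustrative; the paper solves this decentrally with Max-Sum).
# Each pending task is assigned to one UAV within communication range;
# the assumed utility is negative travel distance minus a workload
# penalty that grows with the tasks already queued on a UAV.

from itertools import product
from math import dist

def allocate(uavs, tasks, comm_range, workload_weight=1.0):
    """Return the task->UAV assignment maximising total utility."""
    # Feasible UAVs per task: those within communication range.
    feasible = [[u for u, pos in enumerate(uavs) if dist(pos, t) <= comm_range]
                for t in tasks]
    best, best_value = None, float("-inf")
    for assignment in product(*feasible):   # brute force over joint choices
        value = 0.0
        loads = [0] * len(uavs)
        for task_pos, u in zip(tasks, assignment):
            value -= dist(uavs[u], task_pos)      # travel cost
            value -= workload_weight * loads[u]   # workload penalty
            loads[u] += 1
        if value > best_value:
            best, best_value = assignment, value
    return best, best_value                 # best is None if infeasible

if __name__ == "__main__":
    uavs = [(0.0, 0.0), (10.0, 0.0)]
    tasks = [(1.0, 1.0), (2.0, 0.0), (9.0, 1.0)]
    assignment, value = allocate(uavs, tasks, comm_range=12.0)
    print(assignment, round(value, 2))
```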
Similarity-Based Framework for Unsupervised Domain Adaptation: Peer Reviewing Policy for Pseudo-Labeling
The inherent dependency of deep learning models on labeled data is a well-known problem and one of the barriers that slow down the integration of such methods into different fields of applied science and engineering, in which experimental and numerical methods can easily generate a colossal amount of unlabeled data. This paper proposes an unsupervised domain adaptation methodology that mimics the peer review process to label new observations in a domain different from the training set. The approach evaluates the validity of a hypothesis using domain knowledge acquired from the training set through a similarity analysis, exploring the projected feature space to examine the class centroid shifts. The methodology is tested on a binary classification problem, where synthetic images of cubes and cylinders in different orientations are generated. In the case of a domain shift in physical feature space, the methodology improves the accuracy of the object classifier from 60% to around 90% without human labeling.
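A minimal sketch of the similarity-based pseudo-labeling step, assuming a nearest-centroid rule in feature space; the margin-based acceptance test below is a simple stand-in for the paper's peer-review policy, and all names are illustrative.

```python
# Minimal sketch of similarity-based pseudo-labeling for domain adaptation
# (illustrative; the paper's "peer review" acceptance policy and
# centroid-shift analysis are richer than the agreement rule used here).

import numpy as np

def pseudo_label(source_feats, source_labels, target_feats, margin=0.1):
    """Nearest-centroid pseudo-labels, kept only when confidently assigned."""
    classes = np.unique(source_labels)
    # Class centroids learned on the labelled source domain.
    centroids = np.stack([source_feats[source_labels == c].mean(axis=0)
                          for c in classes])
    # Distance of every target sample to every class centroid.
    dists = np.linalg.norm(target_feats[:, None, :] - centroids[None, :, :],
                           axis=2)
    order = np.argsort(dists, axis=1)
    labels = classes[order[:, 0]]
    # "Review" step (stand-in): accept a label only if the best centroid is
    # clearly closer than the runner-up, i.e. the relative margin is large.
    best, second = np.take_along_axis(dists, order[:, :2], axis=1).T
    accepted = (second - best) / (second + 1e-12) > margin
    return labels, accepted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = np.concatenate([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    lab = np.array([0] * 50 + [1] * 50)
    tgt = rng.normal(5, 1, (10, 2)) + 1.0   # shifted target domain
    labels, accepted = pseudo_label(src, lab, tgt)
    print(labels, accepted)
```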
Bayesian Optimization with Additive Kernels for a Stepwise Calibration of Simulation Models for Cost-Effectiveness Analysis
A critical aspect of simulation models used in cost-effectiveness analysis lies in accurately representing the natural history of diseases, requiring parameters such as probabilities and disease burden rates. While most of these parameters can be sourced from scientific literature, they often require calibration to align with the model's expected outcomes. Traditional optimization methods can be time-consuming and computationally expensive, as they often rely on simplistic heuristics that may not ensure feasible solutions. In this study, we explore the use of Bayesian optimization to enhance the calibration process by leveraging domain-specific knowledge and exploiting structural properties within the solution space. Specifically, we investigate the impact of additive kernel decomposition and a stepwise approach, which capitalizes on the sequential block structure inherent in simulation models. This approach breaks down large optimization problems into smaller ones without compromising solution quality. In some instances, parameters obtained using this methodology may exhibit less error than those derived from naive calibration techniques. We compare this approach with two state-of-the-art high-dimensional Bayesian optimization techniques: SAASBO and BAxUS. Our findings demonstrate that Bayesian optimization significantly enhances the calibration process, resulting in faster convergence and improved solutions, particularly for larger simulation models. This improvement is most pronounced when combined with a stepwise calibration methodology.
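The stepwise idea can be sketched as calibrating one block of parameters at a time while earlier blocks stay fixed. The sketch below uses scikit-learn's Gaussian process with an expected-improvement acquisition; the kernel, budgets, padding default, and the toy loss are placeholders, not the paper's additive-kernel construction or the SAASBO/BAxUS baselines.

```python
# Sketch of stepwise calibration with Bayesian optimisation (illustrative).
# Each block of parameters is calibrated in turn while earlier blocks stay
# fixed, mirroring the sequential block structure of simulation models.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, X, best):
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma                  # minimisation
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def calibrate_block(loss, fixed, dim, budget=25, seed=0):
    """Bayesian-optimise one block of `dim` parameters in [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(5, dim))                       # initial design
    y = np.array([loss(np.concatenate([fixed, x])) for x in X])
    for _ in range(budget):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                      normalize_y=True).fit(X, y)
        cand = rng.uniform(size=(256, dim))              # random candidates
        x_next = cand[np.argmax(expected_improvement(gp, cand, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, loss(np.concatenate([fixed, x_next])))
    return X[np.argmin(y)]

def stepwise_calibration(loss, block_dims, default=0.5):
    total = sum(block_dims)
    fixed = np.array([])
    for dim in block_dims:                   # calibrate blocks sequentially
        # Later blocks stay at a default until calibrated, assuming earlier
        # outcome blocks do not depend on later parameters.
        pad = lambda p: np.concatenate([p, np.full(total - len(p), default)])
        block_loss = lambda p: loss(pad(p))
        fixed = np.concatenate([fixed, calibrate_block(block_loss, fixed, dim)])
    return fixed

if __name__ == "__main__":
    target = np.array([0.2, 0.8, 0.5, 0.3])  # toy "expected outcomes"
    loss = lambda p: float(np.sum((p - target) ** 2))
    print(stepwise_calibration(loss, block_dims=[2, 2]).round(2))
```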
A Citizen Science Approach for Analyzing Social Media With Crowdsourcing
Social media have the potential to provide timely information about emergency situations and sudden events. However, finding relevant information among the millions of posts being added every day can be difficult, and in current approaches developing an automatic data analysis project requires time and technical skills. This work presents a new approach for the analysis of social media posts, based on configurable automatic classification combined with Citizen Science methodologies. The process is facilitated by a set of flexible, automatic and open-source data processing tools called the Citizen Science Solution Kit. The kit provides a comprehensive set of tools that can be used and personalized in different situations, particularly during natural emergencies, starting from the images and text contained in the posts. The tools can be employed by citizen scientists for filtering, classifying, and geolocating content, with a human-in-the-loop approach to support the data analyst, including feedback and suggestions on how to configure the automated tools, and techniques to gather inputs from citizens. Using a flooding scenario as a guiding example, this paper illustrates the structure and functioning of the different tools proposed to support citizen scientists in their projects, and a methodological approach to their use. The process is then validated by discussing three case studies based on the Albania earthquake of 2019, the COVID-19 pandemic, and the Thailand floods of 2021. The results suggest that a flexible approach to tool composition and configuration can support the timely setup of an analysis project by citizen scientists, especially in the case of emergencies in unexpected locations.
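The human-in-the-loop routing idea can be sketched as confidence-based triage: confident automatic decisions are applied directly, and uncertain posts are queued for citizen review. The class and thresholds below are hypothetical, not the Solution Kit's actual API.

```python
# Minimal sketch of human-in-the-loop post triage (illustrative; the
# Citizen Science Solution Kit's actual tools and thresholds differ).
# An automatic classifier scores each post; confident decisions are
# applied directly, uncertain posts go to citizen-scientist review.

from dataclasses import dataclass, field

@dataclass
class Triage:
    keep_above: float = 0.8      # auto-accept threshold
    drop_below: float = 0.2      # auto-reject threshold
    review_queue: list = field(default_factory=list)

    def route(self, post, relevance_score):
        if relevance_score >= self.keep_above:
            return "relevant"
        if relevance_score <= self.drop_below:
            return "irrelevant"
        self.review_queue.append(post)       # humans decide the grey zone
        return "needs_review"

if __name__ == "__main__":
    triage = Triage()
    posts = [("river overflowing near bridge", 0.93),
             ("lunch pics", 0.05),
             ("lots of water on the road?", 0.55)]
    for text, score in posts:
        print(triage.route(text, score), "-", text)
    print("queued for citizens:", len(triage.review_queue))
```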
Solving the coalition structure generation problem on a GPU
We develop the first parallel algorithm for Coalition Structure Generation (CSG), which is central to many multi-agent systems applications. Our approach involves distributing the key steps of a dynamic programming approach to CSG across computational nodes on a Graphics Processing Unit (GPU), such that each of the thousands of threads of computation can be used to perform small computations that speed up the overall process. In so doing, we solve important challenges that arise in solving combinatorial optimisation problems on GPUs, such as the efficient allocation of memory and computational threads to every step of the algorithm. In our empirical evaluations on a standard GPU, our results show an improvement of orders of magnitude over current dynamic programming approaches, with the gap between the CPU- and GPU-based algorithms growing as the problem size increases. Thus, our algorithm is able to solve the CSG problem for 29 agents in one hour and thirty minutes, as opposed to three days for the current state-of-the-art dynamic programming algorithms.
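For reference, this is a compact CPU version of the dynamic programme that the GPU algorithm parallelises: the best value of a coalition is either its own value or the best sum over a two-way split, and the splits for different coalitions are the independent sub-computations mapped onto GPU threads. The code is an illustrative sketch, not the authors' CUDA implementation.

```python
# CPU reference for the CSG dynamic programme (illustrative sketch).
# Coalitions are bitmasks over n agents; f(C) is the best value achievable
# by partitioning C, found by trying every split of C into two halves --
# exactly the independent sub-computations a GPU can run in parallel.

def best_partition_values(v, n):
    """v: dict mapping coalition bitmask -> characteristic value."""
    f = {0: 0.0}
    for coalition in range(1, 1 << n):
        best = v[coalition]                  # keep the coalition whole
        # Enumerate proper sub-splits {S, coalition \ S} without repeats.
        s = (coalition - 1) & coalition
        while s > coalition ^ s:             # visit each unordered pair once
            best = max(best, f[s] + f[coalition ^ s])
            s = (s - 1) & coalition
        f[coalition] = best
    return f

if __name__ == "__main__":
    # Three agents; pairs are synergistic but the grand coalition is not.
    v = {0b001: 1.0, 0b010: 1.0, 0b100: 1.0,
         0b011: 2.5, 0b101: 2.0, 0b110: 2.0,
         0b111: 3.0}
    f = best_partition_values(v, 3)
    print(f[0b111])   # optimal coalition structure value -> 3.5
```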
Tractable Bayesian Learning of Tree Augmented Naive Bayes Classifiers
Bayesian classifiers such as Naive Bayes or Tree Augmented Naive Bayes (TAN) have shown excellent performance given their simplicity and heavy underlying independence assumptions. In this paper we introduce a classifier that takes TAN models as its basis and takes uncertainty in model selection into account. To do this we introduce decomposable distributions over TANs and show that the expression resulting from the Bayesian model averaging of TAN models can be integrated in closed form if we assume the prior probability distribution to be a decomposable distribution. This result allows for the construction of a classifier with a shorter learning time and a longer classification time than TAN. Empirical results show that the classifier is, in most cases, more accurate than TAN and approximates the class probabilities better.
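As background, a single TAN structure can be learned by a maximum spanning tree over conditional mutual information (Chow-Liu style), which is the kind of model the paper averages over; the sketch below learns only that single-tree baseline and does not reproduce the paper's closed-form Bayesian model averaging. Function names are illustrative.

```python
# Sketch of single-TAN structure learning (illustrative; the paper's
# contribution -- closed-form Bayesian averaging over all TANs under a
# decomposable prior -- is not reproduced here).

import numpy as np
from itertools import combinations

def cond_mutual_info(x, y, c):
    """I(X;Y|C) estimated from discrete samples."""
    mi = 0.0
    for cv in np.unique(c):
        m = c == cv
        pc = m.mean()
        for xv in np.unique(x[m]):
            for yv in np.unique(y[m]):
                pxy = np.mean(m & (x == xv) & (y == yv)) / pc
                px = np.mean(m & (x == xv)) / pc
                py = np.mean(m & (y == yv)) / pc
                if pxy > 0:
                    mi += pc * pxy * np.log(pxy / (px * py))
    return mi

def learn_tan_tree(X, cls):
    """Maximum spanning tree over I(Xi;Xj|C); returns parent[] rooted at 0."""
    d = X.shape[1]
    weights = {(i, j): cond_mutual_info(X[:, i], X[:, j], cls)
               for i, j in combinations(range(d), 2)}
    in_tree, parent = {0}, {0: None}
    while len(in_tree) < d:                  # Prim's algorithm
        i, j = max(((i, j) for (i, j) in weights
                    if (i in in_tree) ^ (j in in_tree)),
                   key=weights.get)
        child = j if i in in_tree else i
        parent[child] = i if i in in_tree else j
        in_tree.add(child)
    return parent

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cls = rng.integers(0, 2, 400)
    x0 = (cls + (rng.random(400) < 0.2)) % 2   # noisy copy of the class
    x1 = (x0 + (rng.random(400) < 0.1)) % 2    # depends on x0 beyond the class
    x2 = rng.integers(0, 2, 400)               # pure noise
    X = np.stack([x0, x1, x2], axis=1)
    print(learn_tan_tree(X, cls))              # expect x1 attached to x0
```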