
    Bayesian Discovery of Multiple Bayesian Networks via Transfer Learning

    Bayesian network structure learning algorithms are being used with limited data in domains such as systems biology and neuroscience to gain insight into the underlying processes that produce observed data. Learning reliable networks from limited data is difficult; transfer learning can therefore improve the robustness of learned networks by leveraging data from related tasks. Existing transfer learning algorithms for Bayesian network structure learning give a single maximum a posteriori estimate of network models. Yet many other models may be equally likely, so a more informative result is provided by Bayesian structure discovery. Bayesian structure discovery algorithms estimate posterior probabilities of structural features, such as edges. We present transfer learning for Bayesian structure discovery, which allows us to explore the shared and unique structural features among related tasks. Efficient computation requires that our transfer learning objective factor into local calculations, which we prove is the case for a broad class of transfer biases. Theoretically, we show the efficiency of our approach. Empirically, we show that compared to single-task learning, transfer learning is better able to positively identify true edges. We apply the method to whole-brain neuroimaging data. Comment: 10 pages.
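
    As a rough illustration of the kind of computation involved (not the paper's algorithm), the sketch below enumerates every DAG over three variables, scores each with a Gaussian BIC-style local score plus a per-edge bonus for edges shared with a related task, and reports posterior edge probabilities. The synthetic data, the score, the bias form, and all constants are assumptions made for illustration.

```python
# Toy Bayesian structure discovery with a per-edge "transfer bias" (illustrative sketch).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_samples = 3, 200
# Synthetic data generated from a known chain X0 -> X1 -> X2 (illustrative only).
X = np.zeros((n_samples, n_vars))
X[:, 0] = rng.normal(size=n_samples)
X[:, 1] = 0.8 * X[:, 0] + rng.normal(scale=0.5, size=n_samples)
X[:, 2] = 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n_samples)

def local_score(child, parents, X):
    """Gaussian BIC-style local score of one node given a candidate parent set."""
    y = X[:, child]
    A = np.column_stack([X[:, parents], np.ones(len(y))]) if parents else np.ones((len(y), 1))
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    sigma2 = max(resid.var(), 1e-12)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * A.shape[1] * np.log(len(y))

def is_acyclic(edges):
    """Topological-sort style check: repeatedly remove nodes with no incoming edges."""
    nodes, rem = set(range(n_vars)), set(edges)
    while nodes:
        sinks = [v for v in nodes if not any(t == v for _, t in rem)]
        if not sinks:
            return False
        nodes -= set(sinks)
        rem = {e for e in rem if e[1] in nodes}
    return True

# Structural features from a related task; edges shared with it receive a bonus.
# Because the bonus attaches to individual edges, it adds to each child's local
# term, which is the kind of factorization the abstract emphasizes.
related_edges = {(0, 1), (1, 2)}
bias_strength = 2.0

all_edges = [(i, j) for i in range(n_vars) for j in range(n_vars) if i != j]
dags = []
for mask in itertools.product([0, 1], repeat=len(all_edges)):
    edges = {e for e, keep in zip(all_edges, mask) if keep}
    if not is_acyclic(edges):
        continue
    score = sum(local_score(c, [p for p, q in edges if q == c], X) for c in range(n_vars))
    score += bias_strength * len(edges & related_edges)   # transfer bias term
    dags.append((edges, score))

# Posterior probability of each edge = weighted fraction of DAGs containing it.
max_score = max(s for _, s in dags)
weights = [(edges, np.exp(s - max_score)) for edges, s in dags]
total = sum(w for _, w in weights)
for e in all_edges:
    print(e, round(sum(w for edges, w in weights if e in edges) / total, 3))
```

    Exhaustive enumeration is only feasible for tiny networks; the point of the sketch is how an edge-level transfer bias enters the local scores that drive the edge posteriors.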

    Algorithms for Graph-Constrained Coalition Formation in the Real World

    Coalition formation typically involves the coming together of multiple, heterogeneous agents to achieve both their individual and collective goals. In this paper, we focus on a special case of coalition formation known as Graph-Constrained Coalition Formation (GCCF) whereby a network connecting the agents constrains the formation of coalitions. We focus on this type of problem given that in many real-world applications, agents may be connected by a communication network or only trust certain peers in their social network. We propose a novel representation of this problem based on the concept of edge contraction, which allows us to model the search space induced by the GCCF problem as a rooted tree. Then, we propose an anytime solution algorithm (CFSS), which is particularly efficient when applied to a general class of characteristic functions called m+a functions. Moreover, we show how CFSS can be efficiently parallelised to solve GCCF using a non-redundant partition of the search space. We benchmark CFSS on both synthetic and realistic scenarios, using a real-world dataset consisting of the energy consumption of a large number of households in the UK. Our results show that, in the best case, the serial version of CFSS is 4 orders of magnitude faster than the state of the art, while the parallel version is 9.44 times faster than the serial version on a 12-core machine. Moreover, CFSS is the first approach to provide anytime approximate solutions with quality guarantees for very large systems of agents (i.e., with more than 2700 agents). Comment: Accepted for publication, cite as "in press".
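
    To make the edge-contraction view concrete, here is a small recursive sketch (an illustration under my own assumptions, not the authors' CFSS code): each step either contracts a remaining "green" edge, merging the two coalitions at its endpoints, or marks it "red" so they stay separate; the leaves of the resulting rooted tree are the feasible coalition structures of the constraint graph.

```python
# Enumerate feasible coalition structures of a constraint graph by edge contraction.
def enumerate_structures(coalitions, green, red):
    """Yield every feasible coalition structure of the constraint graph.

    coalitions: dict mapping a representative node id -> frozenset of merged agents
    green/red:  sets of frozenset({u, v}) edges still open / marked forbidden
    """
    if not green:
        yield [sorted(c) for c in coalitions.values()]
        return
    e = next(iter(green))
    u, v = tuple(e)

    # Branch 1: mark the edge red, so u's and v's coalitions stay separate.
    yield from enumerate_structures(coalitions, green - {e}, red | {e})

    # Branch 2: contract the edge, merging v's coalition into u's.
    merged = dict(coalitions)
    merged[u] = coalitions[u] | coalitions[v]
    del merged[v]
    new_green, new_red = set(), set()
    for edge_set, out in ((green - {e}, new_green), (red, new_red)):
        for f in edge_set:
            a, b = tuple(f)
            a, b = (u if a == v else a), (u if b == v else b)
            if a != b:
                out.add(frozenset({a, b}))
    new_green -= new_red   # a red constraint between the merged coalitions dominates
    yield from enumerate_structures(merged, new_green, new_red)

# Triangle constraint graph over agents 0, 1, 2 -> five feasible structures.
coalitions = {i: frozenset({i}) for i in range(3)}
green = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2)]}
for cs in enumerate_structures(coalitions, green, set()):
    print(cs)
```

    On the triangle this prints the five feasible coalition structures exactly once each; an anytime algorithm such as CFSS would additionally bound and prune subtrees using the characteristic function rather than enumerate them all.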

    Q-Strategy: A Bidding Strategy for Market-Based Allocation of Grid Services

    The application of autonomous agents to the provisioning and usage of computational services is an attractive research field. Various methods and technologies from artificial intelligence, statistics and economics come together to (i) achieve autonomic provisioning and usage of Grid services, (ii) devise competitive bidding strategies for widely used market mechanisms, and (iii) incentivize consumers and providers to use such market-based systems. The contributions of the paper are threefold. First, we present a bidding agent framework for implementing artificial bidding agents, supporting consumers and providers in technical and economic preference elicitation as well as in automated bid generation for the requesting and provisioning of Grid services. Second, we introduce a novel consumer-side bidding strategy that enables goal-oriented, strategic behavior in the generation and submission of consumer service requests and the selection of provider offers. Third, we evaluate and compare the Q-Strategy, implemented within the presented framework, against the Truth-Telling bidding strategy in three mechanisms: a centralized CDA, a decentralized online machine scheduling mechanism, and a FIFO-scheduling mechanism.
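
    Purely to illustrate the flavour of a learning-based bidding strategy evaluated against Truth-Telling, the toy sketch below runs epsilon-greedy reinforcement learning over a handful of discrete bid levels in a pay-as-bid market with random asks. The market model, reward definition, and all constants are assumptions, not the paper's Q-Strategy or its mechanisms.

```python
# Toy learning-based bidding loop (illustrative; not the paper's Q-Strategy).
import random

random.seed(1)
valuation = 10.0                       # the consumer's true valuation of the service
bid_levels = [2, 4, 6, 8, 10]          # discretised bid prices the agent may submit
q = {b: 0.0 for b in bid_levels}       # estimated value (Q) of each bid
n = {b: 0 for b in bid_levels}         # visit counts, for sample-average updates
epsilon, episodes = 0.2, 5000

def market_clearing_price():
    """Stand-in for the market mechanism: a random ask between 3 and 9."""
    return random.uniform(3.0, 9.0)

for _ in range(episodes):
    # Epsilon-greedy selection over the discrete bid levels.
    bid = random.choice(bid_levels) if random.random() < epsilon else max(bid_levels, key=q.get)
    ask = market_clearing_price()
    # Pay-as-bid reward: surplus (valuation - bid) if the bid wins, else zero.
    reward = (valuation - bid) if bid >= ask else 0.0
    n[bid] += 1
    q[bid] += (reward - q[bid]) / n[bid]       # incremental sample-average update

print("learned Q-values:", {b: round(v, 2) for b, v in q.items()})
print("greedy bid:", max(bid_levels, key=q.get), " truth-telling bid:", valuation)
```

    In this toy pay-as-bid market the learned greedy bid shades below the truthful valuation, which is the kind of strategic behavior a learning bidding strategy can discover while a Truth-Telling bidder forgoes surplus.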

    Decentralized Cooperative Planning for Automated Vehicles with Continuous Monte Carlo Tree Search

    Urban traffic scenarios often require a high degree of cooperation between traffic participants to ensure safety and efficiency. By observing the behavior of others, humans infer whether or not they are cooperating. This work aims to extend the capabilities of automated vehicles, enabling them to cooperate implicitly in heterogeneous environments. Continuous actions allow for arbitrary trajectories and hence are applicable to a much wider class of problems than existing cooperative approaches with discrete action spaces. Based on cooperative modeling of other agents, Monte Carlo Tree Search (MCTS) in conjunction with Decoupled-UCT evaluates the action-values of each agent in a cooperative and decentralized way, respecting the interdependence of actions among traffic participants. The extension to continuous action spaces is addressed by incorporating novel MCTS-specific enhancements for efficient search space exploration. The proposed algorithm is evaluated under different scenarios, showing that it is able to achieve effective cooperative planning and generate solutions that egocentric planning fails to identify.
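
    A minimal sketch of the two ingredients mentioned above, for a single decision step with two agents: Decoupled-UCT, where each agent keeps its own action statistics but is updated with the shared joint return, and progressive widening to handle a continuous action space. The reward model, constants, and overall structure are illustrative assumptions, not the paper's planner.

```python
# Decoupled-UCT with progressive widening over continuous actions (single step, two agents).
import math
import random

random.seed(0)

class AgentNode:
    """Per-agent statistics: Decoupled-UCT keeps separate action sets and value
    estimates for every agent instead of one joint action table."""
    def __init__(self):
        self.actions = []   # sampled continuous actions (here: assertiveness in [0, 1])
        self.n = []         # visit count per sampled action
        self.q = []         # mean joint return per sampled action
        self.total = 0      # total visits of this node

    def select(self, c_pw=1.5, alpha=0.5, c_ucb=1.0):
        # Progressive widening: grow the discrete action set as visits accumulate.
        if len(self.actions) < c_pw * (self.total + 1) ** alpha:
            self.actions.append(random.uniform(0.0, 1.0))
            self.n.append(0)
            self.q.append(0.0)
        # UCB1 over this agent's own actions only (the "decoupled" part).
        def ucb(i):
            if self.n[i] == 0:
                return float("inf")
            return self.q[i] + c_ucb * math.sqrt(math.log(self.total + 1) / self.n[i])
        return max(range(len(self.actions)), key=ucb)

    def update(self, i, ret):
        self.total += 1
        self.n[i] += 1
        self.q[i] += (ret - self.q[i]) / self.n[i]

def joint_return(a1, a2):
    """Toy merge scenario: progress is rewarded, both being assertive is penalised."""
    return a1 + a2 - 3.0 * a1 * a2

agents = [AgentNode(), AgentNode()]
for _ in range(5000):
    picks = [ag.select() for ag in agents]
    a1, a2 = (ag.actions[i] for ag, i in zip(agents, picks))
    r = joint_return(a1, a2)
    for ag, i in zip(agents, picks):
        ag.update(i, r)          # every agent is updated with the same joint return

best = [ag.actions[max(range(len(ag.q)), key=lambda i: ag.q[i])] for ag in agents]
print("selected assertiveness per agent:", [round(a, 2) for a in best])
```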

    UVSD: Software for Detection of Color Underwater Features

    Underwater Video Spot Detector (UVSD) is a software package designed to analyze underwater video for continuous spatial measurements (path traveled, distance to the bottom, roughness of the surface, etc.). Laser beams of known geometry are often used in underwater imagery to estimate the distance to the bottom. This estimation is based on manual detection of the laser spots, which is labor intensive and time consuming, so usually only a few frames can be processed this way. That allows for spatial measurements on single frames (distance to the bottom, size of objects on the sea-bottom), but not for the whole video transect. We propose algorithms, and a software package implementing them, for the semi-automatic detection of laser spots throughout a video, which can significantly increase the effectiveness of spatial measurements. The algorithm for spot detection is based on the Support Vector Machine (SVM) approach to artificial intelligence. The user is only required to mark, on a few frames, the points he or she thinks are laser dots (to train an SVM model); the program then uses this model to detect the laser dots in the rest of the video. As a result, a precise spatial scale (with precision limited only by the quality of the video) is established for every frame. This can be used to improve video mosaics of the sea-bottom. The temporal correlation between spot movements and changes in their shape provides information about sediment roughness: simultaneous spot movements indicate a changing distance to the bottom, while uncorrelated changes indicate small local bumps. UVSD can be applied to quickly identify and quantify seafloor habitat patches, help visualize habitats and benthic organisms within large-scale landscapes, and estimate transect length and area surveyed along video transects.
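
    A small sketch of the workflow the abstract describes: train an SVM on user-labelled pixel colours, classify the pixels of another frame, and convert the known laser-beam separation into a per-frame spatial scale. The synthetic colours, the 10 cm beam spacing, and the crude two-spot split are assumptions for illustration, not UVSD's implementation.

```python
# Illustrative laser-spot detection via an SVM over pixel colours, plus scale estimation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# "User clicks" on a training frame: RGB values of laser-spot and background pixels.
spot_pixels = rng.normal(loc=[220, 40, 40], scale=10, size=(50, 3))    # bright red dots
background = rng.normal(loc=[60, 90, 110], scale=30, size=(300, 3))    # sea-bottom tones
X_train = np.vstack([spot_pixels, background]) / 255.0
y_train = np.array([1] * len(spot_pixels) + [0] * len(background))

model = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

def detect_spot_centroids(frame_rgb, model):
    """Classify every pixel of a frame and return the centroids of the two laser spots."""
    h, w, _ = frame_rgb.shape
    labels = model.predict(frame_rgb.reshape(-1, 3) / 255.0).reshape(h, w)
    ys, xs = np.nonzero(labels)
    if len(xs) == 0:
        return []
    split = xs.mean()   # crude two-spot split along x (real code would use connected components)
    left = (xs[xs < split].mean(), ys[xs < split].mean())
    right = (xs[xs >= split].mean(), ys[xs >= split].mean())
    return [left, right]

# Synthetic 60x80 frame with two red spots 30 pixels apart.
frame = np.tile(np.array([60, 90, 110], float), (60, 80, 1))
frame += rng.normal(scale=20, size=frame.shape)
for cx in (25, 55):
    frame[28:32, cx - 2:cx + 2] = [220, 40, 40]
frame = frame.clip(0, 255)

(lx, ly), (rx, ry) = detect_spot_centroids(frame, model)
pixel_dist = np.hypot(rx - lx, ry - ly)
beam_separation_m = 0.10               # known laser geometry (assumed 10 cm spacing)
print("metres per pixel:", round(beam_separation_m / pixel_dist, 5))
```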

    Energetics of the brain and AI

    Do the energy requirements of the human brain impose constraints that give reason to doubt the feasibility of artificial intelligence? This report reviews some relevant estimates of brain bioenergetics and analyzes some of the methods of estimating brain emulation energy requirements. Turning to AI, there are reasons to believe that the energy requirements of de novo AI will have little correlation with brain (emulation) energy requirements, since the cost could depend merely on processing higher-level representations rather than on billions of neural firings. Unless one thinks the human way of thinking is the optimal or most easily implementable way of achieving software intelligence, we should expect de novo AI to make use of different, potentially very compressed and fast, processes.
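
    A back-of-envelope sketch of the scaling argument: the roughly 20 W power draw of the brain is a standard estimate, but every other number below is an assumed, illustrative placeholder rather than a figure from the report.

```python
# Illustrative energy arithmetic; all non-brain numbers are assumed placeholders.
brain_power_w = 20.0                   # typical estimate of human brain power draw
synaptic_events_per_s = 1e14           # rough order-of-magnitude estimate
j_per_synaptic_event = brain_power_w / synaptic_events_per_s
print(f"energy per synaptic event ~ {j_per_synaptic_event:.0e} J")

# If a de novo AI reasons over higher-level representations, it may need far fewer
# primitive operations per "step of thought" than an emulation needs simulated
# firings, so its energy cost need not track brain energetics at all.
hardware_j_per_op = 1e-11              # assumed energy cost per digital operation
emulation_ops_per_step = 1e12          # assumed firings simulated per high-level step
de_novo_ops_per_step = 1e6             # assumed compressed operations per step
print("emulation step  ~", emulation_ops_per_step * hardware_j_per_op, "J")
print("de novo AI step ~", de_novo_ops_per_step * hardware_j_per_op, "J")
```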